On March 6, 2001, a specification proposal was born within the JCP. It was called JSR 107: Java Temporary Caching API (JCache for short) and it seemed doomed to languish within the JCP longer than any other specification proposal for the language.
But almost exactly 13 years later, in March of this year, the specification was completed thanks to the efforts of Greg Luck, and Oracle’s Brian Oliver and Cameron Purdy.
We caught up with Luck in his new job as CEO of Hazelcast, a company that, not coincidentally, offers a JCache-based in-memory data grid. He started out as CTO of Hazelcast earlier this year, and was promoted to CEO in June. Before that, he was CTO of Terracotta.
SD Times: What made you start working on JSR 107 after it had been stagnant for so long?
Luck: I was a part-timer where someone paid me to implement the specification as it was. I implemented it and pointed out it couldn’t be done completely because the specification was incomplete. I said, “If you want to pay me to work on the specification…” They said no. But at Terracotta, once I got settled, they were happy to finish the specification. So while I was at Terracotta, I did that work.
In about October 2011, I started working more than 50% of my time on it. It was a much bigger effort than anybody appreciated. We went through some Oracle guys.
Oracle had created the specification, right?
It was started 11 or 12 years ago by Oracle. Whatever the original purpose was, it was lost. Cameron Purdy was specification lead, but he was busy as hell running Tangosol, so eventually, two and a half years ago, I had the time. Cameron wanted to do it, so we said we’ll put 50% project time in to get it done. We got started, then we got held up by legal stuff with Oracle, and then later Software AG.
Then Brian Oliver and I got going again. We finished the work in December 2013 and took a couple months to go through the process of releasing it.
How can people learn about using JCache?
We’ve created a website called JCache.org. Back in the day, Sun had a page where you could see a couple implementations of specifications. This is like that. We’ll be adding more implementations as they are released.
The other thing with JCache, because it’s new and the vendors have just implemented this literally weeks ago, a lot of people don’t know about it. It’ll be a big thing after JavaOne. This is a topic where the interest level will slowly rise as more vendors implement. It’s probably the most interesting thing going into Java EE 8.
Inside the Java world, implementations are available, but do you expect non-Java-specific data stores to implement it as well?
We definitely kept this in mind when we did the specification. I can see Memcached or a NoSQL store implementing the main classes, like cache manager and cache entry, so you can do the basic operations across a wide range of implementations. If you want full TCK compliance, though, you have to go with a Java-based in-memory data grid. All of those have proprietary APIs, and the whole point of JCache is that you can swap one out with no code change.
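That no-code-change swap comes from coding against the standard javax.cache API rather than any vendor's own classes. A minimal sketch (it assumes the JSR 107 API jar plus some compliant provider, such as Hazelcast or Ehcache, is on the classpath; the cache name "scores" is just an example):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheDemo {
    public static void main(String[] args) {
        // Caching.getCachingProvider() discovers whichever JSR 107
        // provider is on the classpath, so nothing below names a vendor.
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();

        MutableConfiguration<String, Integer> config =
                new MutableConfiguration<String, Integer>()
                        .setTypes(String.class, Integer.class);

        Cache<String, Integer> cache = manager.createCache("scores", config);
        cache.put("alice", 42);
        System.out.println(cache.get("alice"));

        manager.close();
    }
}
```

Because the provider is resolved at runtime, switching vendors is a dependency change rather than a code change.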
What have you been working on at Hazelcast?
We’ve been making a big investment this year to raise the quality to enterprise grade. For version 3.3, the focus has been stability and quality. We’ve been rewriting the documentation as well. The plan is to come out with zero open bugs.
The next place we’re going is high-density caching. As a rule of thumb, an untuned JVM will produce a 1-second-per-GB pause [for garbage collection]. A 100GB heap might pause for 100 seconds, and bad things happen when it does. This means that Java, out of the box, is not really suitable for in-memory computing where you have a maximum allowed pause.
The Java Non-blocking I/O stuff that was put in five or six years ago puts data in these direct byte buffers outside the Java heap area so you don’t get these pauses.
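The off-heap mechanism he is referring to is visible in plain java.nio: a direct buffer's storage lives outside the garbage-collected heap. A small sketch:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // allocateDirect reserves memory outside the Java heap, so the
        // bytes stored here are not objects the collector has to trace —
        // the pause-avoidance trick described above.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);

        buf.putLong(0, 123456789L);   // absolute write at offset 0
        long value = buf.getLong(0);  // absolute read back

        System.out.println(buf.isDirect());
        System.out.println(value);
    }
}
```

The trade-off is that you are back to managing raw bytes and offsets yourself, which is why frameworks wrap this in a cache or data-grid API.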
What we’re doing in Hazelcast is what we call high density. Our approach, rather than going off heap, is to create large byte arrays on heap. We can get hundreds of gigabytes in the JVM without going off heap. We have an on-heap slab allocator. We want to combine this with JCache, which is simple to use and widely distributed but also [supported] in lots of other things. We want to make it work great, and also make it high density so you can have terabyte-size caches.
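A toy illustration of the on-heap idea (my sketch under stated assumptions, not Hazelcast's actual allocator): values are serialized into one large pre-allocated byte array, with a small index mapping keys to offsets. The collector then traces a single long-lived array instead of millions of individual entry objects.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class SlabCache {
    private final byte[] slab;          // one big on-heap allocation
    private int next = 0;               // bump-pointer allocation; no
                                        // freeing or compaction in this toy
    private final Map<String, int[]> index = new HashMap<>(); // key -> {offset, length}

    public SlabCache(int capacityBytes) {
        slab = new byte[capacityBytes];
    }

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (next + bytes.length > slab.length) {
            throw new IllegalStateException("slab full");
        }
        System.arraycopy(bytes, 0, slab, next, bytes.length);
        index.put(key, new int[] { next, bytes.length });
        next += bytes.length;
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        return new String(slab, loc[0], loc[1], StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        SlabCache cache = new SlabCache(1024);
        cache.put("greeting", "hello");
        System.out.println(cache.get("greeting"));
    }
}
```

A production slab allocator would add free-list management, fixed-size slab classes, and binary serialization; the sketch only shows why the GC sees so few objects.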