Perhaps the most intriguing part of the Cloud Foundry update was the intimation that a Cloud Foundry micro cloud was coming for developers later this year. The micro cloud will be a version of Cloud Foundry that can be run on a desktop, giving developers a local environment in which to test their cloud applications without having to leave their machines.
The desktop-based cloud testing environment had previously been dominated by Eucalyptus, the open-source project aimed at recreating the Amazon Web Services APIs. Eucalyptus has carved out a niche for itself as a test bed for Amazon-targeted applications that need to be tweaked and poked before deployment. Using this cloud operating system, developers can test their projects without having to pay per CPU on Amazon.
Eucalyptus has also been making headway in the enterprise with its new Eucalyptus Enterprise Edition. The big draw here is support for Windows as well as Linux, and compatibility across multiple hypervisors. In fact, Eucalyptus has been in the works for almost three years now, making it the grandfather of cloud operating systems.
And if Eucalyptus is the grandfather, the new kid on the block is Nimbula. This cloud operating system was created by Chris Pinkham, the same fellow who led the construction of Amazon Web Services. He’s even following the same development model of using South African teams combined with Silicon Valley folk to create what is essentially Amazon 2.0. As the original architect of Amazon Web Services, Pinkham has added all the features he felt were missing from that system.
Nimbula allows users to spin up virtual machines both internally and directly inside of Amazon Web Services, all with a unified identity and management system. That's a nice feature when you consider that Amazon's EC2 suffered an outage for most of the day on April 21. Bridging the gap between public and private clouds is a hot new theme.
OpenStack was updated in April to a version codenamed “Cactus.” Mark Collier, vice president of marketing and business development at OpenStack, said that the future of the platform will include the integration of public and private clouds. “Up until now, the big news has been all the industry backers getting involved,” he said.
“We think this is going to be the year of deployments of OpenStack. The next big phase is going to be connecting all of these clouds. You can do some interesting hybrid scenarios. We’re seeing that enterprises want to connect private clouds to public clouds for when they need that capacity.”
For Hadoop, the mother ship is still Cloudera, the company headed by former Sleepycat CEO Mike Olson, and the current employer of Doug Cutting, creator of Hadoop. On April 12, Cloudera updated its distribution of Hadoop to include a number of improvements.
First, it added support for the latest Linux kernels, speeding up map/reduce and file system I/O. This release also added new ODBC support for business intelligence tools, making it easier to move data between those tools and Hadoop.
Still, Cloudera is not the only company in the Hadoop pool. DataStax, at the end of March, released Brisk, its own combined platform built from Apache Cassandra and Hadoop. DataStax focuses on Cassandra by day, and with the release of Brisk, the company has fused the two Apache projects, allowing live NoSQL-hosted data to be analyzed within Hadoop. This streamlines the map/reduce pipeline and allows jobs to execute faster as well.
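The appeal of pairing a live Cassandra store with Hadoop is that the same map/reduce model can run over the data wherever it lives. As a rough illustration of that model (plain Python for clarity, not the actual Brisk or Hadoop API), a word count can be expressed as a map phase that emits key/value pairs and a reduce phase that aggregates them:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, value) pair for each word -- here, (word, 1).
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle: group emitted values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    # Reduce: aggregate each key's values into a final result.
    return {key: sum(values) for key, values in grouped.items()}

# Stand-in for rows pulled from a live data store.
rows = ["Cassandra stores live data", "Hadoop analyzes data"]
counts = reduce_phase(map_phase(rows))
```

In a real deployment, the map and reduce functions would run in parallel across the cluster, reading input splits directly from the data store rather than from an in-memory list.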
Matt Pfeil, CEO and cofounder of DataStax, said, “The challenge of ‘big data’ is twofold. The analytical side is well understood and served by Hadoop and Hive. However, we live in a real-time world, and the ability for applications to interact with big data at low latency is equally important.
“Apache Cassandra was bred for big data [and] real-time scenarios, and using it to power Apache Hive and Apache Hadoop gives users a single solution that serves both needs.”