As Apache Hadoop 2.0 was released, Doug Cutting, creator of the Apache Hadoop project and chief architect at Cloudera, was preparing to give yesterday's keynote at the Big Data Technology Conference in San Francisco. We caught up with him before he spoke and quizzed him about Hadoop 2.0, Hadoop 3.0, and the state of the Hadoop community.
Four years ago, you said that if Hadoop was still the only Big Data processing platform in the market in four years, it would have won and would remain the de facto standard for many years to come. Hadoop 2.0 shipped this morning from Apache, and there's still no sign of a competing product. Does that mean Hadoop has won?
It looks like nothing else has appeared. It’s really become the de facto standard. I honestly expected Microsoft or Oracle or IBM to come out with something competing.
I think that’s a credit to the open-source methodology. It’s doing something everyone can get on board with. Apache tries to make sure we have projects everyone can support.
(More on Hadoop 2.0: Hadoop 2.0 comes, bringing YARN)
Hadoop 2.0 includes support for batch jobs outside of the Map/Reduce model. Was this something you ever considered while writing Hadoop?
We talked about it fairly early on. Arun [Murthy of Hortonworks] had this proposal to refactor Map/Reduce into a more general platform, and make Map/Reduce an application-level logic on top of that. At the time, it was like, “That’d be nice some day, but we need to get Map/Reduce working well first.” Arun held onto that dream, and eventually when Map/Reduce was stable and out in wide use, he went back and started pushing that agenda again, and now we’re there.
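To make that concrete: under YARN, Map/Reduce becomes just one application framework that asks the platform for containers. Below is a minimal, hypothetical client-side sketch using Hadoop 2.x's YarnClient API; the application name, command and resource sizes are illustrative, and a real application would also ship an ApplicationMaster to negotiate its containers.

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class MinimalYarnApp {
  public static void main(String[] args) throws Exception {
    // Connect to the ResourceManager, the general platform described above.
    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(new YarnConfiguration());
    yarn.start();

    // Ask YARN for an application slot; Map/Reduce does the same thing.
    YarnClientApplication app = yarn.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("hello-yarn"); // illustrative name

    // The command YARN runs as this application's ApplicationMaster.
    ContainerLaunchContext am = Records.newRecord(ContainerLaunchContext.class);
    am.setCommands(Collections.singletonList("echo hello-from-yarn"));
    ctx.setAMContainerSpec(am);

    // Resources requested for the ApplicationMaster's container.
    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(256);
    capability.setVirtualCores(1);
    ctx.setResource(capability);

    ApplicationId id = yarn.submitApplication(ctx);
    System.out.println("Submitted application " + id);
  }
}
```

In Hadoop 2, Map/Reduce itself runs through this same path: its ApplicationMaster is application-level logic on top of the platform, exactly as described.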
HDFS becomes highly available in Hadoop 2.0, yet for years HDFS was ridiculed as a less-than-stellar file system compared to more modern, functional file systems. Do you think the HDFS updates in Hadoop 2.0, combined with the continued popularity of HDFS, are a vindication of your original designs?
I think it's a vindication of the original Google design being a great starting point. The strategy we went with was to get something working that demonstrated the scalability and utility, and not to worry about having all the features from day one. I think people often get distracted because they don't recognize what the critical thing is. Triage is the term they use in medicine. A lot of projects don't have enough triage.
Now we're back, filling in the gaps. We've had to do a lot of work on security, and the same with the single point-of-failure problem. Rolling upgrades, snapshots and disaster recovery are all things we've been able to add after the fact. I think it's roughly the order of features Google added. You need to make sure you've got the scalability and basic usability right before any of it matters. If you haven't proven that, then it doesn't matter whether it's secure or not.
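To show what the single point-of-failure fix looks like from the outside: in Hadoop 2's HA setup, clients address a logical nameservice rather than one NameNode host, and a failover proxy provider routes calls to whichever NameNode is active. Below is a minimal, hypothetical client-side sketch; the nameservice and host names are made up for illustration.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHaClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // One logical nameservice backed by two NameNodes (hosts illustrative).
    conf.set("dfs.nameservices", "mycluster");
    conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020");

    // The proxy provider fails over to the active NameNode transparently.
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    // Clients name the nameservice, not a single machine.
    FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
    System.out.println("Root exists: " + fs.exists(new Path("/")));
  }
}
```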
But that’s been your strategy all along: Get it working, not fancy. Has this been one of your reasons for success? Ruthless focus on getting the important stuff done while ignoring some portions of functionality?
I think it helps if your strategy can be a grassroots adoption strategy, which I am very fond of. At the outset, there were no grand claims made [about Hadoop], there were no sales pitches. It was, “Here’s some stuff, try it out. If it does something useful, use it; if it almost does something useful, help us get that last percentage.” People came to it with reasonable expectations. Now we run the risk of hyping it.
Cloudera was originally supposed to be a product company, but you’ve been forced by the market to offer lots of services and training. Has this shift been difficult?
I think we never expected training to be as big a part of our business as it has been, but it's a real complement to our business. We need to train people so they're able to use the technology, and it's not something we do begrudgingly. But it's not our main business, and it's not the business we intend to be our growth engine.
But we’ve got a great training team that has taught 15,000 people so far. It’s something we’re happy to do. It also helps us have great documentation that goes hand-in-hand with people who can really explain the stuff. We also send our new employees through training.
(What’s all the hubbub over Hadoop, anyway? Zeichick’s Take: Ignore Hadoop at your peril)
Is training the way to solve the skills gap that exists now? Many organizations are having trouble finding Hadoop-competent engineers and developers.
Training is part of it. It gets folks started; gets them over their inhibitions. But as with anything, it is easier to grow people than to hire them. You've already got people who know your business, and that's the important thing. Knowing the problem space and the parameters, and having a good head on your shoulders, are important. Of course, training can help them get started.
How long will Hadoop remain the de facto Big Data platform?
The platform is destined to be the mainstay of data centers for quite some time. I'd say 10 years, easy. Ten years from now, we're going to continue to see Hadoop gaining market share as the primary player. All the trends look that way; there's nothing that undermines it. If developers have some need that Hadoop doesn't fit, it's a flexible and loosely coupled enough platform that it can embrace the changes it needs. Nobody is going to be able to say no to that: They don't have to abandon Hadoop, because they can change Hadoop.
What do you think of the Spark project to build an in-memory Hadoop process execution model?
I think it’s great stuff, Spark. If we see a lot of people adopting it, we’ll support it. That’s the way HBase came along. Initially, it was out of scope, but our customers said “We need it.” It was a big investment in getting the developers on board, getting us up to speed and making HBase suitable and supportable. That continues to be a big investment. We also recently started supporting [Apache] Accumulo.
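For readers who haven't seen Spark's model, here is a minimal sketch of the in-memory idea using Spark's Java API (lambdas assume Spark 1.x or later on Java 8; the path and filters are illustrative): a dataset is cached in cluster memory once, so later queries skip the disk reads.

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkCacheSketch {
  public static void main(String[] args) {
    // "local" runs in-process; a cluster URL would be used in production.
    JavaSparkContext sc = new JavaSparkContext("local", "cache-demo");

    JavaRDD<String> lines = sc.textFile("hdfs:///logs/app.log"); // illustrative path
    // cache() pins the filtered dataset in memory across queries.
    JavaRDD<String> errors = lines.filter(l -> l.contains("ERROR")).cache();

    System.out.println(errors.count()); // first pass reads from HDFS
    System.out.println(errors.filter(l -> l.contains("timeout")).count()); // served from memory
    sc.stop();
  }
}
```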
Look at Project Impala. That very much grew out of customer needs. Customers wanted to do interactive SQL queries, and we didn't see anything out there. We knew from experience at Google what a good architecture for that looked like, so we set about building it.
Cloudera prioritizes its engineering efforts by requiring that any new thing we add has a specific customer who needs it. We make a laundry list of things we might do, and then we go and attach customers to them. The one with the most customers gets done.
If we see a lot of people demanding Spark and Storm, we’re going to pull those in, certainly.
(Speaking of Storm: Twitter sets Summingbird into the wild)
What will be the big additions in Hadoop 3.0, if you can speculate out that far? What types of things are still needed in Hadoop?
I think we'll see a continued demand for high-quality multi-tenancy support. YARN gets us a huge step toward better multi-tenancy, but it's not the last step. Really supporting a wide range of different applications is a complicated thing to do. It'll be an ongoing project to get things to integrate well and to have institutions really use one cluster for both production and research. When we can fulfill this promise, you can store your data once and bring different kinds of processing to it, and different parts of the organization can share a single cluster, with a single copy of the data.
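One concrete piece of that sharing today is YARN's CapacityScheduler: an administrator defines queues (say, production and research) in capacity-scheduler.xml and gives each a share of the cluster, and every job targets a queue. A minimal, hypothetical sketch of the job-side half follows; the queue and job names are illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SharedClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Target the "research" queue; "production" jobs would name theirs.
    // Queues and their capacities are defined cluster-side by the admin.
    conf.set("mapreduce.job.queuename", "research");

    Job job = Job.getInstance(conf, "research-experiment");
    // ... set mapper, reducer, input and output paths as usual ...
    // Both queues run against the same cluster and the same copy of the data.
  }
}
```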
Another obvious feature is support for transactions. I think, in the grand scheme of things, Hadoop will support online transaction processing. It'll be a ways out, and it won't be in Hadoop proper. But it'll be somewhere in the stack. Perhaps it will be related to multi-data-center support. This might be done at a higher level, but it'll be part of the ecosystem. If you look at [Google] Spanner, it is a transactional database that spans many data centers. They've got [their versions of] HDFS and HBase and higher-level tools that guarantee transactions are written reliably in multiple locations.
How is Mahout doing? It seems to have lost some of its shine.
I think Mahout is a library of different algorithms written by different people and maintained by different people, so the quality and consistency aren't there. Some of the algorithms are excellent and best in class, but sometimes best in class lives in other projects. I think the original dream was that it would become the home of all the best machine-learning algorithms, but I don't know that that's quite come to fruition. But people are regularly using Mahout, too.
(For more on the latest version of Mahout: Apache Mahout 0.8 released)
Hortonworks is pushing Tez as a transformative way to allow Hive users to work faster and more interactively with the data. What do you think of Project Tez?
Tez will make Hive queries faster, which will be a big improvement for a lot of folks. It'll permit job flows beyond the strict Map/Reduce model, and allow people to experiment with different tools and different data flowing through them. I think it'll be a nice thing to have. Anything that adds a fundamental performance improvement can be a game-changer.
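For a sense of what that means to a Hive user: switching execution engines is a session setting rather than a query rewrite. Below is a minimal, hypothetical sketch over the standard HiveServer2 JDBC driver (the host, port and table are illustrative, and the setting assumes a Hive build with Tez support).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveOnTezSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // HiveServer2 endpoint; host and port are illustrative.
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "user", "");
    Statement stmt = conn.createStatement();

    // Plan queries as Tez DAGs instead of chained Map/Reduce jobs.
    stmt.execute("SET hive.execution.engine=tez");

    ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM logs"); // illustrative table
    while (rs.next()) {
      System.out.println("rows: " + rs.getLong(1));
    }
    conn.close();
  }
}
```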
What other projects are of interest to you? What’s the next big thing in Hadoop?
We know what Spark and Storm are going to bring to the table. I'm really excited about getting search into the platform. Pretty soon, people will wonder how they did search without Hadoop. What the next huge thing will be is hard to say. A lot of what we spend time on these days is polish: filling in features enterprises need so they can handle their compliance issues. These are things that are somewhat boring, but important in supporting auditability.