Arun Murthy is a busy fellow. When he’s not serving as architect at Hortonworks, the Hadoop company he co-founded, he’s flying around the world giving keynote addresses. It’s quite a long way from where he was 10 years ago, working on Hadoop inside Yahoo.

But then, the future is always uncertain. That’s why we sat down with Murthy to talk about the future of Hadoop and Big Data processing as a whole.

What is the next big focus for Apache Hadoop as a whole?
I think if you look at the big picture, Hadoop started off as map/reduce and HDFS. Things have obviously changed a lot. We’ve had YARN for a while now, so map/reduce is no longer the be-all and end-all. We also have Spark and Flink, and a better Hive, and on and on. The infrastructure side of the Hadoop space is alive and kicking.

The idea always was to let a thousand flowers bloom, and that has happened. It’s not just the open-source communities that have done this, either; other vendors, like IBM, EMC and SAS, are taking their product lines and making them Hadoop-compatible.

(Related: How to get started with Hadoop)

That’s really great. If you look at Hadoop, we’re now coming to the end of the first big wave. The first wave has been about establishing technologies and making sure enough of the gaps are filled well enough that you can build applications directly on top of your data.

As people start to build newer and newer applications, we go from post-transaction to pre-transaction. Predictive analytics has been around for a long time, but with Hadoop you can do analytics at very fine granularity. You can make every customer feel special.

What people have realized as they build more and more of these apps is that a lot of new-generation applications are primarily driven by data. You can build apps that delight and inform the end customer, but every modern app we’re going to build is also a data app.

That’s one view of the world. If we look at the datacenter part of the world, for a time it was only one machine; then you had virtualization. Now you have things like containerization that are primarily driving efficiency.

Docker is the poster child of that. For these data apps, Docker helps you build in a way that is DevOps-friendly and repeatable. Instead of focusing on what’s the next HBase, Spark or Flink, the Hadoop community should take a step back and ask, ‘How do we make it easy for people to build apps that take advantage of the data in their platform?’

I would also say simplicity, for developers and end users alike, is important. Say the end user is a businessperson at an enterprise. If he or she wants to extract value from the data they have, they have to deal with registries and all sorts of other technologies that have to work together: Spark has to work with HBase and with NiFi. If you are the enterprise user, you have to put them together yourself.
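To give a sense of what “putting them together yourself” means, here is a minimal sketch in Scala of just the wiring needed to get Spark reading from HBase, before any business logic runs. It uses the standard TableInputFormat bridge; the table name ‘customers’ is a made-up example:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseSparkGlue {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-spark-glue"))

    // Hand-wire Spark to HBase: configuration, input format, key/value classes.
    // None of this is business logic; it is pure integration plumbing.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "customers") // hypothetical table

    val rows = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    println(s"rows read from HBase: ${rows.count()}")
    sc.stop()
  }
}
```

Multiply that by every pairing in a real pipeline (Spark and NiFi, Storm and HBase, and so on) and the integration burden on the enterprise user becomes clear.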

If you are at Accenture helping people put these apps together, you’re doing a lot of grunt work when you really want to focus on the core business logic. What we want to do, and what we have to enable as a community, is let you get off-the-shelf applications that you just download and go with on the platform.

You want to be able to download an app that does predictive analytics; there will be some amount of customization, but the base is available.

A lot of that would be at a very high level; you want it to look like an app you download and run on your platform, the same way developers already deliver solutions and software, except now it has to run on a distributed Hadoop cluster. It needs distributed data, it must obey your enterprise’s security model, and you have to have some data governance.

Finally, it needs to have a very user-friendly management console. We have all of these pieces in the platform now, but you as the enterprise business user have to put them together yourself.

What does this look like for the end user?
We as a community want to make it easier for these integrations to be done by someone else—to have them just be done.

Think of this as an assembly you put together. Maybe it’s Docker containers. You have some simple controls to launch these assemblies, and security, governance and management are implemented in a simple management interface. You download a bundle and click ‘Go,’ and it should just go. If you can assume you have technologies in the open like YARN and HDFS, they become the equivalent of POSIX for the data world.

You have the Docker containers for the actual business process, then you have Ambari, which allows you to manage this. I should be able to download an assembly that you wrote, and I can modify the business logic, or I can decide that I don’t want Spark Streaming but I want Storm.

In the beginning, a year or so ago, we started the Apache Slider project to make it easy to bring new apps onto YARN. If you blow that up, it’s not just about bringing one app onto YARN; we want to be able to host whole apps on the Hadoop platform. In the next year or two, we’ll spend a lot of time and effort getting that to work.

Slider was a baby step. We have a lot more work going on inside the community to take it further.
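As a rough illustration of what Slider enables, an application instance is described declaratively instead of being coded against the YARN client APIs. The sketch below is modeled on Slider’s documented resources.json format; the component name and resource numbers are made up:

```json
{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {},
  "global": {},
  "components": {
    "worker": {
      "yarn.role.priority": "1",
      "yarn.component.instances": "2",
      "yarn.memory": "1024",
      "yarn.vcores": "1"
    }
  }
}
```

Slider reads a descriptor like this and negotiates the containers with YARN for you, which is exactly the kind of packaging an app-store-style assembly would build on.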

What is the community doing to help simplify and unify the Hadoop ecosystem?
I think one way the community is doing that is by having things like the YARN and Spark APIs…but really, to me, the end users are business users. The way to ultimately make it easier for them is to make products and solutions available out of the box so they don’t have to understand Hadoop at all. When you download an app on Windows or Mac, you don’t care much about the underlying OS.

For the first two to three years, what we really wanted to focus on was making sure you could build any app on this platform. Now, for putting together Storm and HBase and Spark, we want to make it an out-of-the-box experience.

I was talking to a Wall Street firm the other day. They’re trying to build something on Spark to predict customer churn, so right now the bank is hiring people who understand Spark, Scala and Hadoop. It’d be much better if that customer-churn app were built by some third party out there, and they could just download and run it.
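To make the contrast concrete, here is roughly what that in-house effort looks like: a minimal churn classifier on Spark’s MLlib, written in Scala. This is a sketch, not the firm’s code; the HDFS path and the CSV layout (a 0/1 churn label followed by numeric usage features) are assumptions:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

object ChurnPredictor {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("churn-predictor"))

    // Hypothetical input: one row per customer, a 0/1 churn label first,
    // then numeric usage features (calls, spend, support tickets, ...).
    val data = sc.textFile("hdfs:///data/churn.csv").map { line =>
      val cols = line.split(',').map(_.toDouble)
      LabeledPoint(cols.head, Vectors.dense(cols.tail))
    }

    // Hold out 20% of customers to check the model against.
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)
    val model = new LogisticRegressionWithLBFGS().setNumClasses(2).run(train)

    // Simple holdout accuracy; a production pipeline would add feature
    // engineering, validation, security and governance on top of this.
    val correct = test.map(p => (model.predict(p.features), p.label))
      .filter { case (prediction, label) => prediction == label }
      .count()
    println(s"holdout accuracy: ${correct.toDouble / test.count()}")
    sc.stop()
  }
}
```

Every line of that is something the bank has to staff for today; in the assembly model, only the feature definitions would be theirs to customize.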

A customer churn app would represent an assembly because it has to understand all these parts.

I think the call to action is: let’s focus as a community on making it trivial for people to get value out of data. Hadoop is less about the technology and more about applications of the technology. It’s a shift, but if, five years from now, all we’re focused on is building the next Spark, HBase or whatever, that’s going to cause more confusion than it adds value.

Innovation is important, but we have to pay attention to the uptake of innovation, rather than just chase the next API or the next storage platform.