With the release of Databricks Community Edition today, and the second day of Spark Summit kicking off in San Francisco, the Apache Spark project is a hot topic. Under the hood, the upcoming update to Spark brings many changes and improvements to the platform, and they were discussed at length at the Summit.

To get a better handle on the improvements, we chatted with Reynold Xin, cofounder and chief architect of Databricks, to discuss the finer points of the new version.

Tell us about the DataFrames API and the changes therein.
Initially, Spark was built with two focuses: One was performance, and the other was usability.

Spark programs need to be fast to run and fast to write. That’s what created a lot of the hype. As Spark started growing, users coming from a Hadoop background saw that the Spark API was much simpler to use, and these API-based programs could be written in just hundreds of lines.

Once we started selling Spark to a broader class of users, we realized we had many new users coming from Python. The DataFrames API came from that perspective: How can we create a developer API that’s much easier to use than ever for this new class of users who’ve never used Hadoop before?

(Related: What software is spreading at Spark Summit)

Within Python and R there’s a common abstraction called DataFrames. We said, let’s generalize it to work beyond what a single node can handle. By the time of our Spark user survey, after the first release last summer, 60% of Spark users were already using the DataFrames API.

We got a lot of feedback about the DataFrames API. People love it; they don’t need to learn a whole new paradigm. One piece of feedback we got was that people who typically use Java and Scala, two programming languages Spark supports, want type safety. The Python and R DataFrames don’t have compile-time type safety.

So in the second half of 2015 we started thinking about how we could build that kind of type safety on top of the DataFrames API. We came up with this idea of an encoder, which maps an arbitrary user type into a type in the DataFrame. In Spark 1.6, we had to introduce this as a new API, the Dataset API; otherwise we would have broken the old DataFrames API. In 1.6, the Dataset API was compile-time type safe for Scala and Java, while the DataFrames API served Python and R and was not type safe.

A 2.0 release means we don’t have to keep strict backward compatibility. We felt that breaking the API now would create a lot less confusion and allow a more streamlined API in the future. So we have merged these into a single API.
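To make that concrete, here is a minimal Scala sketch of what the merged API looks like in Spark 2.0, where a DataFrame is simply an untyped Dataset[Row]. The case class, column names, and values below are made up for illustration.

```scala
import org.apache.spark.sql.{Dataset, Row, SparkSession}

object DatasetExample {
  // Hypothetical user type; an encoder maps it to Spark's tabular representation
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dataset-example").getOrCreate()
    import spark.implicits._

    // Typed Dataset: field names and types are checked at compile time
    val people: Dataset[Person] = Seq(Person("Alice", 29), Person("Bob", 35)).toDS()
    val adults: Dataset[Person] = people.filter(_.age >= 30)
    adults.show()

    // Untyped view: in 2.0, DataFrame is just Dataset[Row], so both share one API
    val df: Dataset[Row] = people.toDF()
    df.filter($"age" >= 30).show()

    spark.stop()
  }
}
```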

Can you tell us about the Accumulator API?
The Accumulator API is used to compute metrics on data: How many bytes were read by which jobs? How many records have been read? What’s the average time to process each record? In 1.x there is a somewhat complicated hierarchy, and it’s a little difficult to understand how to create new metrics yourself.

Many of these operations only work on primitive types, like integers; counting records, for example. The problem with the old Accumulator API is that it goes through a generic [Java] interface. When you add with the increment-by-one method, it converts that number into a boxed object.

Now when you call “add,” it’s literally “add by one” rather than converting to some generic type and adding.
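As a minimal sketch of what that looks like in Spark 2.0 (the accumulator name and the data here are made up for illustration), a LongAccumulator works directly on primitive longs, so each add stays unboxed:

```scala
import org.apache.spark.sql.SparkSession

object AccumulatorExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("accumulator-example").getOrCreate()
    val sc = spark.sparkContext

    // A long-typed accumulator: add(1L) works on a primitive, no boxed conversion
    val recordCount = sc.longAccumulator("records read")

    sc.parallelize(1 to 100000).foreach { _ =>
      recordCount.add(1L) // literally "add one"
    }

    println(s"records: ${recordCount.value}")
    spark.stop()
  }
}
```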

What’s new with these machine learning pipelines?
This might be the most interesting one for machine learning people. [Typically for machine learning] there are a bunch of different algorithms you run, then you get results and you have a model. The reality is, you probably have different people collaborating on this model, which may require moving the model from one language to another. Then, when you finish building your model, you want to push it into production. Pipeline persistence is a way to write the model out and load it back in in whatever language you want.
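Here is a rough sketch of what that persistence looks like with Spark’s ML pipelines; the stages, column names, training data, and save path are assumptions made for illustration.

```scala
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object PipelinePersistenceExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pipeline-persistence").getOrCreate()
    import spark.implicits._

    // Hypothetical training data with "text" and "label" columns
    val training = Seq(
      (0L, "spark streaming is great", 1.0),
      (1L, "hadoop map reduce", 0.0)
    ).toDF("id", "text", "label")

    // A simple pipeline: tokenize text, hash it into features, fit a classifier
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)
    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))

    val model = pipeline.fit(training)

    // Write the fitted pipeline out; someone else can load it back in later
    model.save("/tmp/example-pipeline-model")
    val reloaded = PipelineModel.load("/tmp/example-pipeline-model")
    reloaded.transform(training).show()

    spark.stop()
  }
}
```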

What’s new with Spark Streaming?
Spark Streaming was first released in about 2013 as a way to unify batch and stream programming on Spark; it was the first framework to do that. Over time, we’ve gotten a lot of good feedback about what users like.

The primary feedback is that it’s difficult to put streaming in production. It’s operationally difficult. We originally decided to treat everything as a stream, and if you treat everything as a stream you can treat a batch as static data that doesn’t get updated.

But after talking to a lot of users, we found they were building this new class of apps called continuous apps. Part of the app works with data that never ends, and another part might be looking up and integrating data and generating machine learning models. It might also be connecting with some transactional database.

It’s difficult to convert those things to work in a streaming fashion. Instead, we took a different approach: we try to take a user’s program and have the engine optimize, incrementalize, and parallelize that work so it can run on never-ending streaming data. This is the idea behind Structured Streaming.
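Here is a brief sketch of what a Structured Streaming query looks like in Spark 2.0; the socket source, host, and port are assumptions for illustration. The word-count logic is written with the same Dataset operations used for batch data, and the engine runs it incrementally as new data arrives.

```scala
import org.apache.spark.sql.SparkSession

object StructuredStreamingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("structured-streaming-example").getOrCreate()
    import spark.implicits._

    // A never-ending stream of text lines (socket source assumed for illustration)
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // The same Dataset operations used on batch data; the engine incrementalizes them
    val wordCounts = lines.as[String]
      .flatMap(_.split(" "))
      .groupBy("value")
      .count()

    // Continuously write the running counts to the console as new data arrives
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```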