Apache Spark stole the show at the Big Data TechCon in Boston this week. Thanks to a keynote address from Spark’s creator and a number of tutorials focused on the project, attendees had ample opportunity to learn about Spark at the event.

While Spark was not the only Big Data topic being discussed at the show, it was the most popular. Matei Zaharia, CTO of Databricks and creator of the Apache Spark project, detailed the strengths of Spark and laid out some of the future additions the platform will see.

Zaharia said Spark is “a general cluster computing engine that is interoperable with Hadoop.” But that general-purpose compute engine has four libraries layered on top that bring far more functionality to the platform than exists in vanilla Hadoop.

Those four libraries focus on SQL, machine learning, stream processing and graph processing. Because these are simply libraries on top of the general compute engine, and because Spark utilizes in-memory computation to speed up its processing, these four libraries can be chained together to construct multi-layer applications that include aspects of each.

This is all possible because Spark groups data into what are known as Resilient Distributed Datasets (RDDs). For each job being run on the cluster, the data being processed is pushed into an RDD. Because Spark tracks what it is working on and what jobs it has recently completed, once an RDD is loaded into memory on the cluster, multiple actions can be taken on it without having to return to the hard disks, said Zaharia.

This means that multiple processing passes across the data can be made very quickly, unlike in Hadoop, where each pass rereads the data from disk every time, even if it was just processed in the most recent query. RDDs are the secret sauce of Spark, and Zaharia said that when a large SQL datastore is processed inside Spark, the entire database is simply stored as an RDD.
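To illustrate the idea, here is a minimal PySpark sketch (the file path and filter condition are hypothetical) of how caching an RDD in memory lets several actions reuse the same data instead of rereading it from disk on every pass:

```python
from pyspark import SparkContext

sc = SparkContext(appName="rdd-caching-sketch")

# Load a text file into an RDD and ask Spark to keep it in cluster memory.
lines = sc.textFile("hdfs:///data/events.log").cache()

# The first action materializes the RDD and populates the cache.
total = lines.count()

# Later actions reuse the cached partitions rather than rereading the file.
errors = lines.filter(lambda line: "ERROR" in line).count()
sample = lines.filter(lambda line: "ERROR" in line).take(5)

print(total, errors, sample)
sc.stop()
```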

Framed
But RDDs are soon getting some backup when it comes to data ingestion. Zaharia said the teams at Databricks are working to bring data frames into Spark. “Spark’s data frames are basically collections of data records. They have a schema, so they have known types and names. They look similar to R or Pandas,” said Zaharia.

Because data frames are natively supported in languages like Python and Java, Zaharia feels this addition to the Spark platform will improve developer productivity. For example, when dealing with a SQL database, data frames allow the developer to pull out specific information, such as state names or ZIP codes, without having to spell out exactly where the table is stored or tweeze out the needed data by hand. Instead, data frames know the context in which their data is stored, and thanks to support in major languages, developers can replace a great deal of code with a few short, succinct lines.
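As a rough sketch of what that looks like in practice (the file and column names here are hypothetical, and the example uses the newer SparkSession entry point rather than the SQLContext of the era), a handful of data frame calls can stand in for a page of hand-written parsing code:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe-sketch").getOrCreate()

# The schema (column names and types) is read from the data itself.
customers = spark.read.json("hdfs:///data/customers.json")

# Pull out just the fields of interest; Spark resolves where and how
# the underlying data is stored.
by_state = (customers
            .select("state", "zip_code")
            .groupBy("state")
            .count())

by_state.show()
spark.stop()
```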

Another improvement Zaharia mentioned was the work Databricks is doing to improve machine learning with Spark. While libraries already exist for processing machine learning inside Spark, Zaharia said that future additions to the platform will enable the creation of a machine learning pipeline. Rather than simply running an algorithm across the data, the machine learning pipeline will help developers usher their applications through the many steps that surround training an algorithm on data.
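A minimal sketch of that pipeline idea, assuming the pyspark.ml API and made-up training data, chains feature extraction and model training into a single reusable object:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-pipeline-sketch").getOrCreate()

# Tiny, made-up training set: text snippets with a binary label.
training = spark.createDataFrame(
    [("spark makes cluster computing easy", 1.0),
     ("completely unrelated text", 0.0)],
    ["text", "label"])

# Each stage feeds its output column into the next stage's input column.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashing_tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)

pipeline = Pipeline(stages=[tokenizer, hashing_tf, lr])
model = pipeline.fit(training)  # runs every stage in order

model.transform(training).select("text", "prediction").show()
spark.stop()
```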

Zaharia said the future of the Spark platform, and of Big Data processing in general, will be heavily tied to hardware advancements in storage.

“I think there is stuff to do in the storage layer. One of the main reasons is that memory has become a lot cheaper, to the point where people are designing databases that assume everything will fit in memory. That’s a great design for the right kind of application. The other thing that’s coming up is non-volatile memory and [solid state drives]. They are somewhere between SSDs and RAM: a little slower than RAM, but it’s persistent. The data structures and things you can do with them will be quite different,” said Zaharia.

Meanwhile, in the booths
Big Data TechCon saw many exhibitors showing off their tools and platforms for speeding application development. DataStax was at the show demonstrating Apache Cassandra; in fact, the Apache Cassandra talk was the best-attended session of the show.

Emcien was demonstrating its analysis engine, which can find patterns hidden in data stored in Hadoop or MySQL. DataTorrent was also showing off its data analysis tools, which offer real-time streaming analytics.

HP Security Voltage, formerly Voltage Security, was at the show to talk about its security offerings for Hadoop users. HP Security Voltage can manage and secure the data stored in a Hadoop cluster, thus preventing nefarious users from getting access to sensitive information that may be stored inside.

Pepperdata offered similar security controls for data inside Hadoop. Pepperdata can implement governance controls on Hadoop data to keep the wrong information out of the wrong hands.

Dataguise’s tools mask and monitor Hadoop data, giving developers and managers a way to restrict access to portions of the Hadoop data set while hiding others behind smoke and mirrors, so analytics can still be run without compromising large swaths of Social Security numbers.

Actian was demonstrating its host of tools, and advocating for the use of SQL on Hadoop thanks to its highly optimized Vortex product. Actian also showed DataFlow, a tool for preparing data for analytics, and for running those analytics inside Hadoop.

Two schools were on hand to offer insights into their newest degree programs. Northeastern University recently launched a graduate certificate and a master’s of science in business analytics, as well as a graduate certificate and a master’s of science in urban informatics. Brandeis University is launching a new master’s of science in strategic analysis this fall; Brandeis’ courses are offered online as well.

Texifter demonstrated its text analysis tools at the show. The company offers methods for mining social data, and now allows developers to search the text of every single “tweet” published since the dawn of Twitter.

Fujifilm displayed its Dternity storage devices, which use high-density storage tape. Dternity systems can scale from 16 TB to nearly 1.3 PB.

MarkLogic showed off its XML database, which now supports JSON as well. The company has continued to grow and its NoSQL database has continued to gather users in Big Data and big government.

ClusterPoint made its American debut at the show. The Latvia-based company is staffed by ex-Googlers who returned to their home country, and it offers a database-as-a-service that is easily scalable, ACID-compliant, and able to handle unstructured data.

LexisNexis discussed HPCC, the high-performance Big Data analysis engine developed internally by the company and used to handle queries for its data business. HPCC is available to external developers looking to build Big Data applications on something other than Hadoop.

WebAction talked about its stream analytics platform, also called WebAction, which can handle high-velocity streams of data and perform timely analytics on them, giving business users direct insight into their operations as they happen.