The Apache Software Foundation has announced Apache Flink as a Top-Level Project (TLP).

Flink is an open-source Big Data system that fuses the processing and analysis of both batch and streaming data. The data-processing engine, which offers APIs in Java and Scala as well as specialized APIs for graph processing, is positioned as an alternative to Hadoop’s MapReduce component, with a runtime of its own. Even so, the system still provides access to Hadoop’s distributed file system (HDFS) and the YARN resource manager.
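
For a sense of the programming model, a minimal batch word count in Flink’s Scala API might look like the sketch below. This is modeled on the project’s own word-count example; the input is hardcoded for brevity, and exact package and method names have shifted somewhat across releases:

```scala
import org.apache.flink.api.scala._

// Minimal Flink batch job: count word occurrences in a small in-memory dataset.
object WordCount {
  def main(args: Array[String]): Unit = {
    // Entry point for a Flink batch program.
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Hardcoded input for brevity; a file source such as
    // env.readTextFile("hdfs://...") could be used instead.
    val text = env.fromElements(
      "to be or not to be",
      "that is the question")

    val counts = text
      .flatMap(line => line.toLowerCase.split("\\W+")) // split each line into words
      .filter(word => word.nonEmpty)                   // drop empty tokens
      .map(word => (word, 1))                          // pair each word with a count of 1
      .groupBy(0)                                      // group by the word (field 0)
      .sum(1)                                          // sum the counts (field 1)

    counts.print()
  }
}
```

Because the source can be swapped for a file in HDFS, the same program plugs directly into existing Hadoop deployments.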

The open-source community around Flink has steadily grown since the project’s inception at the Technical University of Berlin in 2009. Now at version 0.7.0, Flink lists more than 70 contributors and sponsors, including representatives from Hortonworks, Spotify and Data Artisans (a German startup devoted primarily to the development of Flink).

The project’s graduation to TLP after only nine months in incubation raises questions not only about its future, but also about its potential to shape the evolution of Big Data processing. SD Times spoke with Flink vice president (and Data Artisans CTO) Stephan Ewen and Flink Project Management Committee member Kostas Tzoumas (also the CEO of Data Artisans) about where Flink came from, what makes the Big Data engine unique, and where the newly minted TLP is going.

SD Times: Flink is an open-source distributed data analysis engine for batch and streaming data with Java and Scala APIs. In your own words, describe what Flink is as a platform.
Ewen: Flink, as a platform, is a new approach to unifying flexible analytics over streaming and batch data sources. Flink’s technology draws inspiration from Hadoop, MPP databases and data streaming systems, but fuses those in a unique way. For example, Flink uses a data streaming engine to execute both batch and streaming analytics. Flink also contains a lot of compiler technology: it uses Scala macros, Java reflection, and code generation, together with database optimizer techniques, to holistically compile and optimize user code, much as relational databases do.

Tzoumas: To the user, this technology brings easy and powerful programming APIs and a system with state-of-the-art performance, which also performs reliably well in a variety of use cases and hardware settings.
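
Ewen’s point that a single streaming engine executes both kinds of analytics is easiest to see by setting the batch sketch above next to its streaming counterpart, which reads almost identically. The following is a sketch against the DataStream API in its later, stabilized Scala form (the streaming API was still very young in the 0.7 releases, and the socket source and port here are purely illustrative):

```scala
import org.apache.flink.streaming.api.scala._

// Minimal Flink streaming job: running word counts over lines read from a socket.
object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    // Entry point for a Flink streaming program.
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Illustrative source: lines of text arriving on a local socket.
    val text = env.socketTextStream("localhost", 9999)

    val counts = text
      .flatMap(line => line.toLowerCase.split("\\W+")) // split each line into words
      .filter(word => word.nonEmpty)                   // drop empty tokens
      .map(word => (word, 1))                          // pair each word with a count of 1
      .keyBy(0)                                        // partition the stream by the word field
      .sum(1)                                          // maintain a running count per word

    counts.print()
    env.execute("Streaming WordCount") // a streaming job runs until stopped
  }
}
```

The two programs differ mainly in their source and in when results are emitted; underneath, both run on the same streaming dataflow engine, which is the unification Ewen describes.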

Talk about the origins of Flink and the inspiration for its creation. How, if at all, has the project’s core and focus evolved over the past several years?
Tzoumas: In 2009, researchers at TU Berlin and other universities started playing with Hadoop and asked themselves: How can we bring together knowledge from the database systems community and the Hadoop community in a hybrid system?

Back then, Hadoop was fairly new (only MapReduce, and quite hard to use), while SQL databases were more established but clearly could not cover some new use cases. Over the years, the team built a system, called Stratosphere, that intelligently fuses concepts from the Hadoop and SQL database worlds without being a SQL database or being based on Hadoop. The project gained momentum as an open-source GitHub project, and in April 2014 the community decided to submit a proposal to the Apache Incubator. Over time, the scope of the project of course expanded somewhat to cover both streaming and batch data processing, and to provide a smooth experience to the developers who use it, both for data exploration and for production use.
