The term Big Data has been bandied about since the 1990s. It was meant to reflect the explosion of data, structured and unstructured, with which organizations are being deluged. They face challenges that include not only the sheer volume of data but also the need to capture, store, retrieve, analyze and act upon that information.

Technologies and techniques such as MapReduce, Hadoop, Spark, Kafka, NoSQL and more have evolved to help companies get a handle on their ever-expanding data. So, where does it all go from here? What will inform Big Data 2.0?

Not so fast, say several data experts. Not everyone is as far along the Big Data journey as you might think. For small and midsized companies without the copious IT resources and data scientists needed to take advantage of Big Data technologies, it remains something they have read about but have not been able to implement.

“Hadoop is much too complex for organizations that can’t afford large IT departments,” said Tony Baer, a research analyst at Ovum. “The next 2,000 or 3,000 Hadoop adopters won’t have the same profile as the first 3,000, who tend to have more sophisticated IT departments. [The newer adopters are] still trying to figure out the use case. They realize they need to do something, but a lot of them are like deer caught in the headlights.”

Amit Sharma, CEO of data driver provider CData, echoed that sentiment. “The Big Data term itself is slippery,” he said. “Many good definitions are out there, but the term is being extended to things that aren’t Big Data.” What Big Data fundamentally does, Sharma explained, is help resolve scaling problems. But in many of the areas where the real challenges lie, “Big Data is a solution looking for a problem.”

One of the key early tenets of Big Data was to use NoSQL databases, as SQL was seen as too rigid to deal with unstructured data. Now, with time to look back, some experts say that might not be necessary. “There’s no reason why Big Data problems should be different than problems in the SQL world,” CData’s Sharma said.

SQL still relevant in Big Data
Big Data started at Google. As Monte Zweben, CEO of data platform provider Splice Machine, tells it: “They published a MapReduce paper, the open-source version of it, called Hadoop, emerged, and everyone jumped on. It’s revolutionary because it fundamentally enabled the average Java programmer (and later programmers in other languages) to use many computers, servers and GPUs to attack Big Data problems. But as time went on, new inventions came up where people needed to do that more effectively. Spark was an invention that came out of the analytics world, an advancement over the original MapReduce. Key-value stores emerged, like Cassandra and HBase, that allow you to do the serving of applications. So you had innovations in analytics, you had innovations in being able to serve operational applications, you had streaming innovations emerging like Kafka. But one thing is true across all of these things: the low-level programming that was necessary to make them work is no longer acceptable.”

“For it to be actually acceptable to the Global 2000 [companies], it has to be in a higher abstraction or language, and that we believe is SQL, the standard data language. You see many, many projects now putting SQL layers on top of these [compute engines]. We’re one of those SQL evangelists, but we’re not the only ones. Even Spark has its own SQL dialect in it.”
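That shift is easiest to see in Spark itself, where the same aggregation can be written against the low-level API Zweben alludes to or against the engine’s SQL layer. The following is a minimal sketch, not taken from the article; it assumes PySpark is installed and a hypothetical transactions.csv file with zip_code and amount columns.

# Minimal sketch: the same per-zip-code total written two ways in Spark.
# Assumes PySpark and a hypothetical transactions.csv with zip_code and amount columns.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-on-big-data").getOrCreate()
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

# Low-level, MapReduce-style approach: map rows to (key, value) pairs,
# then reduce by key to total the amounts per zip code.
totals = (
    df.rdd
    .map(lambda row: (row["zip_code"], float(row["amount"])))
    .reduceByKey(lambda a, b: a + b)
)
print(totals.take(5))

# The same computation expressed through Spark's SQL layer.
df.createOrReplaceTempView("transactions")
spark.sql(
    "SELECT zip_code, SUM(amount) AS total FROM transactions GROUP BY zip_code"
).show(5)

The second form is what Zweben means by putting a SQL layer on top of the compute engine: the distributed work is the same, but the programmer states it declaratively.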

Zweben went on to explain that organizations attributed the scalability problems of relational databases to the SQL language itself, because SQL was so robust and comprehensive. “It lets you join together tables and lets you do transactions, and these are very complicated data operations. But people thought these databases were too slow and don’t scale to my Big Data problem, so let me go to these NoSQL architectures on the Hadoop stack,” he said. “But the problem was, they threw away the baby with the bathwater. It wasn’t SQL that was broken; it was the underlying architecture supporting the SQL. And that’s what our whole mission in life was at Splice Machine: ‘keep the SQL; fix the architecture.’”

Adam Famularo, CEO of architecture modeling company Erwin, said modeling “will become the heart and soul of your data architecture, your data structure, your data elements…”

Famularo said it all begins with business processes, which the data architecture should then fit. “Let the business lead the data architecture, which then needs data models to model the schema, through to your governance and your approach to governing that data. And that’s where the business comes back in, to be able to help define what the business infrastructure is, the business dictionary, straight through to the data dictionary. It starts with the business and ends with the business, and in between is a whole bunch of data structures that need to be put in place that are then monitored and managed throughout the enterprise, usually by the [chief data officer] and the CDO organization.”

MongoDB’s CTO Eliot Horowitz noted that once data is written, teams don’t want to change it or rearchitect it. “Everyone always wishes they had a perfect data architecture and they’re never going to have it. It can’t really exist, in my opinion,” he said. “What really matters is, can you easily allow people to collaborate on the data and share the data in meaningful ways, while maintaining incredibly high security and privacy controls.”

“The way I think this is going to go,” he added, “is you’re going to have data, you’ll have some database with things in it, and you will configure rules such that different people can see different things, but then you can query that data without having to copy it or move it, and you can just decide who you want to share different things with. If you’re in health care, you can share certain things with insurance agents or insurance companies, or certain aggregate data with researchers, without having to give them a copy of the data, and without having to write a ton of really complex logic. It’s a pretty different kind of model, more like a federated model. The trick there is to get security and privacy done right.”
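In practice, the federated model Horowitz describes comes down to per-role visibility rules evaluated at query time rather than copies of the data handed to each consumer. Below is a minimal, deliberately simplified sketch of that idea; it is not MongoDB’s API, and the roles, field names and records are invented for illustration.

# Hypothetical per-role visibility rules: one copy of the data,
# different consumers see different fields at query time.
VISIBILITY_RULES = {
    "insurer": {"patient_id", "procedure_code", "claim_amount"},
    "researcher": {"procedure_code", "age_bracket", "outcome"},
}

RECORDS = [
    {"patient_id": "p-001", "name": "Alice", "procedure_code": "X12",
     "claim_amount": 1200.0, "age_bracket": "40-49", "outcome": "recovered"},
]

def query_as(role, records):
    """Return only the fields the given role is allowed to see."""
    allowed = VISIBILITY_RULES.get(role, set())
    return [{k: v for k, v in rec.items() if k in allowed} for rec in records]

print(query_as("insurer", RECORDS))     # claim fields, no demographics
print(query_as("researcher", RECORDS))  # de-identified, aggregate-friendly fields

A production system would enforce rules like these inside the database itself rather than in application code, which is where the security and privacy challenge Horowitz mentions comes in.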

Where do we go from here?
Machine learning. Data pipelines. Multi-cloud implementations. Containers.

All of these will play a larger role in how organizations analyze, sort and deliver data to applications.

“Taking advantage of Big Data analytics and taking advantage of machine learning, and AI, is certainly very important for most organizations, and there are tangible benefits. I just think that basically organizations are going to need a lot more guidance, which is why you see more guided analytics, and why I expect that implementations are going to trend toward managed implementations in the cloud – basic managed services,” Ovum’s Baer said.

There is a caveat with managed services, though, Baer cautioned. “A lot of organizations, as they go into the cloud and start using managed services, will need to decide: how dependent am I going to be on this single cloud vendor, and where do I insulate myself so I have some freedom of action? Do I get my managed services from a third party so it’s transparent? Will it abstract me from Amazon so that if I decide I want to run elsewhere, I can? In a way, it’s almost like an enterprise architecture decision: where do I have some insulation between us and the cloud provider? Or are we going to the whole Amazon stack? It’s a sleeper issue. It’s not going to all of a sudden be headlines next year, but I think a lot of organizations are going to start seeing this stuff.”

As Manish Gupta, CMO at Redis Labs, pointed out, complexity in the data space is only growing. “It’s not a swimming pool of data anymore, but an ocean,” he said. Handling data in real time, he added, needs to be a foundational element of any data strategy. Bots will be required to handle the flow of data, and organizations will have to decide how much data can or should be analyzed. Gupta believes that “15 percent of data will be tagged, and about one-fifth of that will be analyzed” (roughly 3 percent of the total).

He also said that the life cycle of technologies will shorten. “Hadoop became mainstream over the past two years, and yet now some enterprises are skipping Hadoop entirely and going straight to Spark. And with Apache Kafka, perhaps you don’t need separate streaming technology.”

As for the technology investments organizations are making today, Gupta said they can hope to get five years out of them. “Organizational structures need to be more agile because of the churn of technology.”

Machine learning tools have come a long way, noted Eric Schrock, chief technology officer at Delphix, and other tools are advancing just as quickly. In fact, he said, “people don’t even necessarily want to shove their data into a Hadoop data lake anymore. They just want to run Spark or TensorFlow or whatever directly on data sources and do whatever they need to do without having the intermediate step of the data lake. The quality of your analytics, the speed of your data science and the quality of your machine learning are highly dependent on your ability to feed data into it. Some of that data is from Twitter feeds and event logs and other things, and if your data is stuck in these big relational databases, you still have that same problem.”
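Running analytics directly against a source, as Schrock describes, can be as simple as pointing Spark at a relational database over JDBC instead of staging the data in a lake first. The sketch below is an illustration under assumptions, not Delphix’s product or code from the article; the JDBC URL, table and credentials are hypothetical placeholders, and the appropriate JDBC driver must be on Spark’s classpath.

# Minimal sketch: run Spark directly against a relational source,
# skipping the intermediate data lake. Connection details are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("no-data-lake").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/sales")  # hypothetical source
    .option("dbtable", "public.orders")
    .option("user", "analyst")
    .option("password", "secret")
    .load()
)

# Analyze in place; no copy of the source lands in a data lake first.
orders.groupBy("region").count().show()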

Data for testing
Production-like data drives higher-quality testing, regardless of where you are in the software development life cycle. Whether you’re a developer doing manual testing, QA verifying a fix, running regression tests or doing system tests, the more your data looks like production data, the better the quality of your testing will be.

Eric Schrock, chief technology officer of Delphix, gave the following example: “Say you’re using stale data, then the data may have changed in production since you ran it, and something that might have worked on data from two weeks ago or two months ago might not work when you actually roll it into production with the current data.”

Schrock added that a common scenario is customers using shared databases. “Maybe they have four teams sharing a database, and on one of them, a developer actually corrupts the database, or drops a table, or does something horrible. Now three other teams that had nothing to do with that developer can’t do work. The thing they were using is now broken. So it’s pretty common for dev-test to have an isolated read-write environment. But that’s challenging, and refreshing it is hard. Making a copy of a 50 TB database is not fast using traditional tools.”

Online Predictive Processing: OLPP
Online Predictive Processing, as defined by Splice Machine CEO Monte Zweben, is essentially the combination of Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP).

Zweben explained: “First they take their old app and put it on an OLPP platform and it just works because it’s SQL. Then they add a little bit of predictive analytics to it, and now all of a sudden this old, stodgy SQL app has a component on it that might be using machine learning and is getting better and better over time. We see OLPP, because it’s SQL, as the on-ramp to AI for even the oldest of SQL applications out there.

“You get a SQL database you can connect to with standard APIs like JDBC and ODBC, you get an Apache Zeppelin notebook available with it, and you get machine learning libraries in process so that you can implement predictive capabilities. You get streaming as well, totally embedded, so you can ingest big batches of data, which might be inventory downloads from an ERP system in a supply chain application, but you also might get streaming ingestion, like split-second transactions off POS terminals in retail stores. Those kinds of things are now all possible inside this relational database management system, which is good at transactions, can power an application and can make predictive analytics actionable.


“OLPP gives you a relational database management system that’s capable of OLTP workloads, like powering commerce sites and mobile applications, at petabyte scale; it can hold petabytes of data and look up a single record in literally milliseconds. You also get OLAP processing. Say you’re a credit card company and you have petabytes of transaction data in a database. A call center that needs to look up a single record, that’s OLTP. Finding the average transaction per zip code, for frequency of transaction and transaction size, means aggregating all that data together into a large data set, and that’s OLAP processing. You also get machine learning, streaming and the notebook.”
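The two workload shapes Zweben contrasts can both be expressed over the standard SQL connectivity he mentions. The sketch below uses pyodbc as one common way to reach a SQL database from Python; the DSN, credentials, table and column names are hypothetical, not Splice Machine specifics.

# Minimal sketch: an OLTP point lookup and an OLAP aggregation over one SQL connection.
# The DSN, credentials, table and columns are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=splice;UID=analyst;PWD=secret")
cur = conn.cursor()

# OLTP: call-center style lookup of a single record, expected in milliseconds.
cur.execute("SELECT * FROM transactions WHERE transaction_id = ?", ("txn-0001",))
print(cur.fetchone())

# OLAP: scan and aggregate the whole table to summarize by zip code.
cur.execute(
    """
    SELECT zip_code,
           COUNT(*)    AS transaction_count,
           AVG(amount) AS avg_transaction_size
    FROM transactions
    GROUP BY zip_code
    """
)
for row in cur.fetchall():
    print(row)

conn.close()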

What’s the difference between data management and data governance?
Data governance is a subset of data management. Erwin CEO Adam Famularo said he believes “data governance is the core underpinning of the data strategy and data approach for a firm. As people start buying data governance-based solutions, they’re literally going to design their company around the data governance solution. It’s almost like the ERP solution for finance; data governance would be for data. It’s really the heart and soul that ties it all together. Now you’re defining the roles of the people who are going to touch your data and who is going to update the data structures; it’s all role-based management of your data. That’s the big change. Data management is the holistic play, so everything falls under that. It could be MDM, data lakes … every technology provider lands in data management in some shape or form.”