The world increasingly runs on data, and that data is only expanding. Like the Blob, it gets everywhere: storage systems, databases, document repositories. According to IDC, the world will hold 44 zettabytes of data by 2020, up from 4.4 zettabytes in 2013. That’s a lot of hard drives.

It’s also a recipe for development and administration nightmares. With Big Data come big hassles. Everything from the storage and backing up of all that data to the governance and security around those data sets can become a major impediment to enterprise software efforts.

Add to this the fact that the Internet of Things has already arrived, and you’ve got a major data retention, search and analysis problem from logs alone. Never mind all the data your customers generate: IoT devices are far chattier communicators than even your most dedicated clients.

Mac Devine, vice president and CTO of emerging technology at IBM, said that this IoT-focused data world is so difficult for enterprise developers that they may end up focusing too heavily on a single aspect of the environment, rather than on the overall goals of their projects.

“There are a couple dimensions of the challenge here in terms of scale,” said Devine. “Many times, when people talk about IoT, their focus is on connectivity, messaging and queuing. They’re not focused on the real value of IoT, which is getting cognitive insight out of the data in real time. One of the problems we have is that not everybody has all the data needed to make real insights.”

The key is to remain focused on the overall goal, said Devine, particularly when dealing with large amounts of data, such as the data that comes from IoT devices. One prerequisite for success is having total control of the endpoints where the data is generated.

“When you look at the IoT space at scale, you need to be able to bake in self-management and self-configuration into the edges,” said Devine. “You have to be able to handle security in a different way than before because you’re dynamically interacting in a point-to-point fashion. You need to know, can this entity get access to this data? Can this data flow to this individual or not?”
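Devine’s two questions amount to a policy check at the edge. A minimal Python sketch of what such a check might look like follows; the entity, clearance and classification names are hypothetical, not drawn from IBM’s products.

```python
# Hypothetical policy check at an IoT edge gateway: before a reading is
# forwarded point-to-point, ask "can this entity see this data?"
from dataclasses import dataclass


@dataclass(frozen=True)
class Entity:
    entity_id: str
    clearances: frozenset  # e.g. {"telemetry", "location"}


@dataclass(frozen=True)
class Reading:
    device_id: str
    classification: str    # e.g. "telemetry" or "location"
    payload: dict


def may_flow(reading: Reading, recipient: Entity) -> bool:
    """Allow the data to flow only if the recipient is cleared for its classification."""
    return reading.classification in recipient.clearances


if __name__ == "__main__":
    analyst = Entity("analyst-7", frozenset({"telemetry"}))
    gps_fix = Reading("sensor-42", "location", {"lat": 40.7, "lon": -74.0})
    print(may_flow(gps_fix, analyst))  # False: location data can't flow to this entity
```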

Adam Wray, CEO of Basho, said that one major issue that has to be taken care of when dealing with large amounts of data is simply the process of preparing it for storage, processing and analysis.

“One of the things we’re seeing at scale is when you bring in all these disparate sets of data, you have to normalize them,” said Wray. “We’re seeing organizations going through efforts to normalize data and create ontologies on top so they can categorize them quickly and they can go where they’re needed.”
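Wray’s normalization-plus-ontology point is easier to see in miniature. The sketch below maps two hypothetical source formats onto one schema and tags each record with a category so it can be routed downstream; the field names and the category label are illustrative only.

```python
# Hypothetical normalization step: two sources report temperature with
# different field names and units; map both onto one schema and tag the
# record with an ontology category used for routing.
def normalize(record: dict, source: str) -> dict:
    if source == "plant_sensors":          # reports Celsius as "temp_c"
        celsius = record["temp_c"]
    elif source == "legacy_scada":         # reports Fahrenheit as "temperature"
        celsius = (record["temperature"] - 32) * 5 / 9
    else:
        raise ValueError(f"unknown source: {source}")
    return {
        "device_id": record["id"],
        "temperature_c": round(celsius, 2),
        "category": "thermal/ambient",     # ontology tag for downstream routing
    }


print(normalize({"id": "a1", "temp_c": 21.5}, "plant_sensors"))
print(normalize({"id": "b2", "temperature": 70.7}, "legacy_scada"))
```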

Normalizing is the first step toward integration, and all integrations are essentially about the data. Benjamin Wilson, product line technical director at Raytheon, has handled some very complicated and difficult integration projects where the data was not only moving quickly but also classified.

“One of the efforts my team led [was to] integrate 40 different systems to a common architecture,” he said. “We had to balance the blend of open architecture security and performance. When we got everything into that standard, we were able to use higher-level reasoning on it. The power is there once everything is normalized into one infrastructure. We were able to show integration of systems that were never integrated before and provide countries with new capabilities.”

Wilson and his team were able to complete these battlefield systems integrations because they were also able to establish a chain of trust between the systems, thus ensuring the data flowing through those systems was reliable and not coming from an enemy.

Data tricks
While there are numerous ways to scale data out to meet demand, there are also some tricks and tips that can help put off the all-important moment when your data simply has to be scaled out.

Marie Goodell, vice president of SAP HANA platform marketing, said that the SAP HANA platform offers one way to put existing infrastructure to work without requiring a complete storage overhaul.

“Many customers will stand up an environment and have a second system,” she said. “We’ve had that for quite a while now, but we can now make the secondary system read-enabled. This enables customers, at the end of a quarter close when you’re running a big massive report, [to] use the secondary system with the primary, simplifying your IT and using resources more effectively.”
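Setting aside SAP HANA’s specifics, what Goodell describes is the familiar pattern of offloading heavy, read-only reporting to a read-enabled secondary. A generic sketch of the routing decision might look like this; the connection handles are placeholders, not HANA configuration.

```python
# Generic sketch of read offloading: send heavy, read-only reporting
# queries to the read-enabled secondary, everything else to the primary.
# "primary" and "secondary" here are placeholder connection handles.
def pick_connection(sql: str, primary, secondary):
    is_read_only = sql.lstrip().lower().startswith("select")
    return secondary if is_read_only else primary


conn = pick_connection(
    "SELECT region, SUM(revenue) FROM sales GROUP BY region",
    primary="primary-conn",
    secondary="secondary-conn",
)
print(conn)  # secondary-conn: the quarter-close report runs off the standby
```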

Bret Taylor, CEO and founder of Quip, has another approach. His company offers Web-based productivity tools: spreadsheets and word processors. While these tools are collaborative in real time, Quip didn’t go down the NoSQL route to make that happen.

“I do think there’s a lot of innovation in data storage,” said Taylor. “We don’t necessarily want the data storage to be where we innovate. We want it reliable and fast and easy to horizontally scale. How much weight do we put on the features of our data store? We put very little.”

Taylor said he does “like putting a lot of that logic at the application layer. It makes it easy to do things like have data exist in multiple data centers. We have a model where mobile and desktop applications synchronize with the server and work offline. You have a uniform view of data on the client and server if you have a simple data model.”

Taylor said developers should “pick a data store for those features and their application. Notably, our documents are broken up into smaller atomic units. What’s unique about Quip is if you have four people editing the same spreadsheet, they’re only touching a small segment of the document. We represented that atomically. It leads to a lot of efficiency. We have a unique model that looks less like a document, even though it manifests itself as a document. It’s something we consider, architecturally, one of our best decisions, but it was no knock on Mongo.”

Thus, while the rest of the world builds distributed NoSQL applications and replicates those databases with something like Paxos to handle conflicts, Quip forgoes this route entirely and sticks with good old MySQL. Each edit is added as a tiny cell of information to the SQL database, and these small edits can be propagated much more quickly than if an entire document took up a single entry in the database.

“It was based on a discussion that we wanted ad hoc queries that supported real-time co-authoring, but also supported offline editing,” said Taylor. “Those are seemingly contradictory. How do you synchronize that? One of our goals from very early on was to structurally reduce the occurrence of conflicts. How do we break our product into as small an atomic unit as possible?”
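In rough outline, the atomic-unit idea Taylor describes can be sketched as a table of document sections, where a co-author’s edit touches a single row. The schema below is illustrative, not Quip’s actual design, and it uses SQLite purely to keep the demo self-contained.

```python
# Rough sketch of the "document as small atomic units" idea: each section
# of a document is its own row, so a co-author's edit updates (and
# propagates) one tiny cell rather than rewriting the whole document.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE doc_sections (
        doc_id     TEXT,
        section_id TEXT,
        content    TEXT,
        version    INTEGER,
        PRIMARY KEY (doc_id, section_id)
    )
""")

# Two sections of the same document live in separate rows.
conn.executemany(
    "INSERT INTO doc_sections VALUES (?, ?, ?, ?)",
    [("doc1", "s1", "Q3 revenue summary", 1),
     ("doc1", "s2", "Headcount plan", 1)],
)

# A co-author edits only section s2; nothing else needs to be rewritten or re-synced.
conn.execute(
    "UPDATE doc_sections SET content = ?, version = version + 1 "
    "WHERE doc_id = ? AND section_id = ?",
    ("Headcount plan (updated)", "doc1", "s2"),
)

print(conn.execute("SELECT * FROM doc_sections ORDER BY section_id").fetchall())
```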

Chris Villinger, vice president of business development and marketing at ScaleOut Software, said that the amount of data being created and stored by enterprises is making it challenging to keep data models simple, however.

“Just the sheer volume of these new real-time streams, there’s way more out there than is being processed and analyzed,” he said. “Factories are getting more in tune to downtimes. We work with an industrial manufacturer for processed cheeses. There’s so much raw material in these vats that if a pump broke down, they could throw out US$50,000 to $100,000 worth of raw materials that would gunk up the machines. They get machine telemetry so they can predict when a machine is about to go down and do preventative maintenance.

“We see a lot more of that becoming more mainstream. A lot of these technologies were the purview of big enterprise systems. ‘Complex’ event processing, for example. That’s falling out of favor because no one wants to buy something with ‘complex’ in it. It’s becoming event-streaming data. You can deploy a streaming system where more traditional enterprise projects would be spending millions for Oracle. These specialized large enterprise software options are coming down in complexity. There are simpler alternatives, but for the first time, open source is driving at the other end of the spectrum. It’s much less feature-rich, but it’s good enough.

“In the e-commerce space, if you look at all the various verticals, where things get very interesting is in the smart factory. With industrial IoT, I think there’ll be a huge breakthrough there. Those industries are very risk-averse and stodgy. They don’t pick up innovative new technology at first adoption. That industry is going to be a lot slower-moving.”
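Stripped of the vendor context, the cheese-vat example is a simple event-streaming pattern: watch pump telemetry and raise a maintenance flag before the machine actually fails. A toy sketch, with an entirely made-up vibration threshold and window, might look like this.

```python
# Toy event-streaming check: flag a pump for preventive maintenance when
# its recent vibration readings trend above a threshold. The threshold and
# window size are made up for illustration.
from collections import deque

WINDOW, THRESHOLD = 5, 7.0  # last 5 readings, arbitrary vibration limit


def monitor(readings):
    recent = deque(maxlen=WINDOW)
    for t, vibration in readings:
        recent.append(vibration)
        if len(recent) == WINDOW and sum(recent) / WINDOW > THRESHOLD:
            yield f"t={t}: schedule maintenance before the pump fails"


telemetry = [(t, 5.0 + 0.5 * t) for t in range(10)]  # slowly rising vibration
for alert in monitor(telemetry):
    print(alert)
```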

Hadooping around
Mike Gualtieri, vice president and principal analyst at Forrester Research, said that many companies are currently rethinking their data strategy, whether that means moving away from expensive data warehouses and onto a Hadoop-based data lake, or moving to NoSQL or document stores. No matter the reason or vector, Gualtieri said many companies are already knee-deep in this transition.

“I think companies are still going full speed ahead with this,” he said. “I think there are market forces trying to find the diff between Hadoop and Spark. A lot of Hadoop distributions can now support different file systems than HDFS. It’s also a processing architecture as well. I call it ‘Hadoop and friends.’ It’s not just the core Hadoop. What is the cheaper and better way to build a data lake? There is no cheaper and better lake.”

Gualtieri said he’s not really seeing a trough of disillusionment with Hadoop, though other analyst firms and media outlets have hinted at such a trough. He said that Hadoop’s value proposition is still unique in the marketplace.

“If there’s any disillusionment, it’s been caused by errant expectations that somehow magically you just put data into Hadoop and people can just make sense of it. That’s why there’s a lot of talk about integration vendors like Talend, and data preparation tools like Trifacta. Those are specifically geared to make sense of that data,” said Gualtieri.

“One of the easy things about Hadoop is it’s pretty easy to get data into the data lake. I know a very large insurer building a data lake, and they have 100 sources internally, and 30 external. It’s pretty easy to get them in. That’s not the problem. The problem is understanding what’s actually there and how to link it together. You can imagine if you have 130 sources of data potentially having some value to it, so we’re right back to the point where now we need metadata and data modeling.”
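Gualtieri’s point is that landing 130 sources in the lake is the easy part; knowing what each one contains and how the sources link together is what demands metadata and modeling. A minimal, hypothetical catalog entry shows the kind of bookkeeping involved; the source names, paths and keys are invented for illustration.

```python
# Hypothetical minimal metadata catalog for a data lake: record where each
# source landed, what it contains, and which keys link it to other sources.
catalog = {
    "claims_internal_07": {
        "path": "/lake/raw/claims/",
        "owner": "claims-systems",
        "fields": ["claim_id", "policy_id", "amount", "filed_date"],
        "joins_on": {"policy_id": ["policies_internal_01"]},
    },
    "weather_external_02": {
        "path": "/lake/raw/weather/",
        "owner": "third-party",
        "fields": ["zip", "date", "precip_mm"],
        "joins_on": {"zip": ["policies_internal_01"]},
    },
}


def linkable(source_a: str, source_b: str) -> list:
    """Return the keys declared as joinable from source_a to source_b."""
    joins = catalog[source_a]["joins_on"]
    return [key for key, targets in joins.items() if source_b in targets]


print(linkable("claims_internal_07", "policies_internal_01"))  # ['policy_id']
```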

Gualtieri said SQL on Hadoop is now standard on most platforms, which means that the real problem with Hadoop is even easier to encounter. “One of the issues with Hadoop and doing queries like that is concurrency,” he said. “It’s 500 users versus a few users. The traditional data warehouses are designed for both fast queries and concurrency, and they do that with hardware and software combined. What enterprise customers desire is to bring it to Hadoop and do all the analysis there.”

When it comes to these new methods of data storage and management, Gualtieri said that “There’s absolutely a new need for metadata and data cleansing. It doesn’t matter where it is; there’s the need for that. Companies are trying to make that easier, using machine learning behind the scenes to clean up data and establish links between the data. Is it going to be a world where all the data is in one giant database? No. The idea of an information fabric or data virtualization layer is very valid; [it’s] more valid for application development.”

Finally, Gualtieri had some holistic advice for data practitioners: “Don’t be a purist about it. Don’t normalize the data in an academic way just to normalize it. Don’t build a model for all the data sources; build a base model. Make a minimum viable product for data models: The same approach developers apply to software development should be applied to data models. Because of Hadoop and the economics of it, you can give data scientists and business analysts much more latitude to build off that. In a data warehouse environment, everything was about a shortage because it cost so much. With the economics of Hadoop, it’s not a shortage. Compared to a warehouse, the data is almost free.”