Bringing together Big Data and big business is much more work for developers than they may anticipate. At the Big Data Technology Conference in San Francisco yesterday, representatives of both the Big Data world and the big business world mingled to discuss their successes and failures in the field.
One company on hand was MetaScale, a wholly owned subsidiary of Sears Holdings. The company began as Sears’ internal Hadoop development and deployment team, but has now been unshackled from the retailer in order to help other businesses implement Big Data infrastructure and processes.
Ankur Gupta, who heads sales and marketing at MetaScale, said that Sears began experimenting with Hadoop four years ago, giving the company a leg up on many other enterprises when it comes to Big Data processing maturity.
“We saw a business opportunity, and we thought we could provide an enterprise-based overview that’s vendor neutral and platform agnostic,” he said. “So we formed MetaScale to help other companies accelerate their Big Data initiatives, so they don’t make the same mistakes we made.
“We help companies get to production faster than they would on their own. We provide a vendor-neutral perspective. No matter if you’re an IBM shop, an HP shop or a Teradata shop, we can provide from our experiences what may and may not work for you. Similarly, with Hadoop, we have experience with all the different distribution providers.”
Andy McNalis, Hadoop infrastructure manager at MetaScale, explained some of the customizations Sears has made to its Hadoop cluster architecture during his talk, titled “Running, Managing, and Operating Hadoop at Sears.”
He said that within the Sears Hadoop cluster, each data node is typically a simple, non-redundant machine with a single power supply and a single 4TB hard drive. The cluster’s Name Node, however, is a more robust, redundantly equipped box, built to handle heavier workloads and to keep running when components fail.
McNalis also said Sears uses a second, backup Name Node server, though not as a hot standby. Instead, this second Name Node exists purely to back up the cluster’s metadata, so the metadata survives if the primary Name Node, and the storage directory along with it, is lost.
McNalis said the Sears project has grown significantly since it was started almost four years ago. “We started off with some really tiny clusters, just playing around. We built our first cluster with 50 data nodes. Today, I’m at 485 data nodes in that same cluster. You can just add the data nodes into the cluster. You don’t have to take an outage,” he said.
“On Hadoop, I’m at a point now where we add entire racks of servers at a time, and there’s no outage, and Hadoop starts using the new machines automatically.”
Predicting a change in data
Elsewhere at the Big Data Technology Conference, Precog founder John De Goes discussed methods for embedding predictive analytics into databases. He prefaced the discussion by admitting that SQL is ill-suited to the task, particularly because it cannot handle ordered lists, also known as arrays.
To illustrate the point, he walked through the process of implementing the k-means algorithm on relational data. (K-means is a common algorithm for grouping similar data points into clusters.) To implement k-means on a relational database, he said, the first thing you have to do is figure out how many clusters you will end up with, which amounts to a tricky guessing game.
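For readers unfamiliar with the algorithm, here is a minimal k-means sketch in plain Python with NumPy. It is an illustration of the technique De Goes described, not code from his talk; note that k must be chosen up front, which is the guessing game he refers to.

```python
import numpy as np

def kmeans(points, k, iterations=100, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of the points assigned to it."""
    rng = np.random.default_rng(seed)
    # k must be fixed before any clustering happens.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Distance from every point to every centroid: shape (n, k).
        distances = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = distances.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels
```

Every step here, distance computation, assignment and re-averaging, maps naturally onto arrays; expressing the same loop in pure SQL means rebuilding each step out of joins and aggregates.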
Next, you have to write an external driver program that itself generates the SQL needed to massage the data into a form k-means can work with. As a result, De Goes said, the process gets very difficult and complicated very quickly.
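As a sketch of what such a driver looks like, the following Python function emits the SQL for a single k-means iteration. The `points(id, x, y)` table and its column names are hypothetical, invented purely for illustration:

```python
def kmeans_step_sql(centroids):
    """Generate the SQL for one k-means iteration against a hypothetical
    points(id, x, y) table: rank centroids by distance for each row,
    keep the nearest, then average each cluster into new centroids."""
    values = ", ".join(
        f"({cid}, {cx}, {cy})" for cid, (cx, cy) in enumerate(centroids)
    )
    return f"""
        WITH centroids(cid, cx, cy) AS (VALUES {values}),
        ranked AS (
            SELECT p.id, c.cid,
                   ROW_NUMBER() OVER (
                       PARTITION BY p.id
                       ORDER BY power(p.x - c.cx, 2) + power(p.y - c.cy, 2)
                   ) AS rnk
            FROM points p CROSS JOIN centroids c
        )
        SELECT r.cid, avg(p.x) AS new_cx, avg(p.y) AS new_cy
        FROM ranked r JOIN points p ON p.id = r.id
        WHERE r.rnk = 1
        GROUP BY r.cid
    """
```

The driver then has to loop: run the query, read the new centroids back out, regenerate the SQL, and repeat until the centroids stop moving, which is exactly the kind of complexity De Goes was pointing to.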
But modern solutions to this problem are available. One solution he referenced was MADlib, an open-source library of database extensions that automates a great deal of the boilerplate code needed to write predictive applications. K-means, for example, is implemented within MADlib and can be triggered with just a few lines of code, he said.
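For comparison, a MADlib call really is only a couple of lines. The sketch below assumes a PostgreSQL database with MADlib installed and a hypothetical `points` table whose array-valued `coords` column holds the coordinates of each point; the connection string and names are illustrative.

```python
import psycopg2  # assumes PostgreSQL with the MADlib extension installed

conn = psycopg2.connect("dbname=analytics")  # hypothetical database
with conn, conn.cursor() as cur:
    # Cluster the array-valued 'coords' column into 3 clusters.
    # madlib.kmeans_random takes the source table, the point column,
    # and the number of clusters k.
    cur.execute("SELECT * FROM madlib.kmeans_random('points', 'coords', 3)")
    print(cur.fetchone())  # the resulting centroids and convergence info
```

The boilerplate of the hand-rolled driver, generating assignment queries and looping until convergence, is handled inside the database extension.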
De Goes went on to detail a number of other answers to the limitations of SQL, such as MonetDB and Rasdaman, both of which approach the problem by letting the database itself support arrays and other important data-access models.
Perhaps the most interesting of his suggestions was the Datalog programming language. A relative of Prolog, Datalog allows for expressive descriptions of data transformation and relationship tasks. However, Datalog is not without its problems.
“In theory, Datalog is a good fit for many kinds of predictive analytics,” said De Goes. “The problem with it is that you can’t just start writing Datalog into your database because no mainstream database supports that. Supporting that would require changes to the inside of the database. SQL is not designed for that. Also, there’s no real distributed implementation of Datalog with the full set of features yet.”