Hadoop has, for the most part, moved beyond the proof-of-concept phase and the initial chasm of adoption. More and more organizations are putting the open-source framework to work on mountains of complex Big Data. The next step in Hadoop’s evolution is getting a handle on governance.
To that end, Hortonworks—the enterprise data platform provider and open-source Apache Hadoop contributor—announced a new Data Governance Initiative (DGI) to design and implement a comprehensive, centralized approach to data governance. The initiative’s goals range from defining governance standards and protocols within Hadoop to creating real-time, auditable and traceable data landscapes.
Ultimately, though, the DGI aims to tie together the rapidly growing ecosystem of Hadoop projects, from HBase, Hive and Pig to Drill, Mahout and ZooKeeper, with a flexible standardized system that serves as a foundation and a common conduit for Hadoop’s growth.
“Everybody has Big Data problems,” said Andrew Ahn, director of governance on Hortonworks’ product management team. “We really needed to tag data well. Just having a metadata tag or a business glossary isn’t that difficult to do in and of itself, but there really wasn’t a product that fit all needs. There is no one-size-fits-all solution; it just doesn’t exist. Oftentimes Hadoop is considered a sort of black box. You have a lot of workflows, but once it enters this governance area, it kind of goes dark. There’s not a lot of visibility in there.”
A new governance paradigm
The initiative is also indicative of the new industries exploring the possibilities afforded by Hadoop and Big Data processing. The DGI’s contributors are not traditional technology partners; aside from Hortonworks, the initiative’s founding members include healthcare giant Aetna, pharmaceutical corporation Merck, and Target, one of the largest retail chains in the U.S.
“From a business standpoint, [governance] is a pain point that a lot of customers have, hence this consortium we’re building,” Ahn said. “We have a lot of industry players. You see one from retail, a multinational pharmaceutical company, a healthcare provider and SAS [Hortonworks’ technology partner, a business analytics software provider]. It really is a testament to where a lot of folks are in their Hadoop journey and the realization that governance isn’t something you can just tack on. You have to plan for it, and it has to be integral to your overall strategy.”
Hortonworks designed the DGI around the use cases of its partners, building governance policies on top of business taxonomies—what Ahn called a “happy meal” of data tags—so that organizations can deploy code into production immediately with the knowledge that they’re always in compliance.
In this way, Hortonworks and its partners see the DGI as a new paradigm for software development, with the open-source community ultimately building governance capabilities based on individual enterprise needs. Ahn pointed to a retail application as an example of creating a culture around governance.
“For retail, you might use PCI, the measure by which governance or compliance standards are set,” Ahn said. “Why not model your data based upon a PCI-based taxonomy from the very beginning? In doing so, it kind of becomes an industry standard so that you can interoperate with other industry actors or auditors all speaking the same language, the same commonly accepted practices and requirements of PCI data.”
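Ahn’s point about modeling data on a PCI-based taxonomy from the very beginning can be illustrated with a small sketch. The taxonomy labels, field names and helper functions below are hypothetical—this is not DGI code, just one way a field-level PCI tagging scheme might look:

```python
# Hypothetical sketch: tagging data fields against a PCI-style taxonomy
# at ingest time. All taxonomy labels and field names are illustrative.

PCI_TAXONOMY = {
    "card_number": "PCI.CardholderData.PAN",
    "expiry_date": "PCI.CardholderData.Expiration",
    "cvv":         "PCI.SensitiveAuthData.CVV",
    "store_id":    "PCI.NonSensitive",
}

def tag_record(record: dict) -> dict:
    """Attach a taxonomy tag to every field as it enters the platform."""
    return {
        field: {"value": value, "tag": PCI_TAXONOMY.get(field, "PCI.Unclassified")}
        for field, value in record.items()
    }

def sensitive_fields(tagged: dict) -> list:
    """Fields an auditor would flag: cardholder or sensitive auth data."""
    return [
        field for field, entry in tagged.items()
        if entry["tag"].startswith(("PCI.CardholderData", "PCI.SensitiveAuthData"))
    ]

record = {"card_number": "4111111111111111", "expiry_date": "12/16", "store_id": "0042"}
tagged = tag_record(record)
print(sensitive_fields(tagged))  # card_number and expiry_date carry sensitive tags
```

Because every field carries a tag drawn from a shared taxonomy, an auditor or another industry actor inspecting the data is, as Ahn puts it, speaking the same language from day one.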
This fundamental shift in Big Data governance style could be key to reducing barriers to Hadoop adoption across industries, according to Mike Gualtieri, principal analyst at Forrester Research.
“Applying old-school dictatorial-style governance to Hadoop would be a disaster for enterprises because it would tamp down the agility of Hadoop,” he said. “The Data Governance Initiative is intriguing because it brings the Hadoop community together with real enterprises. I hope they partner in a way that creates ‘minimally viable governance.’ That would keep Hadoop flexible while giving enterprises the governance they need to prevent data pandemonium.”
The DGI’s core components
At its core, the DGI’s baseline requirements are broken down into five components to apply a centralized governance model across the Hadoop stack:
- Knowledge/Metadata store: A system for ingesting and storing business or technical metadata with the ability to integrate with any existing metadata tool.
- Apache Ranger integration: Apache Ranger’s access and authorization layer is leveraged for data security; its plug-in architecture provides policy enforcement points inside the Hadoop stack to prevent end runs around policy.
- Apache Falcon integration: Handles the data life cycle—data movement, scheduling, management and pipelining.
- Audit store: A “bottomless pit,” according to Ahn, of audit logging and data stashing, with support for a forthcoming immutable data type. It compiles and organizes evidence for governance and compliance scenarios.
- Rules-based policy engine: A flexible system to model any policy rule.
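How these components might fit together can be sketched in miniature: a metadata store supplies tags, a rules-based policy engine evaluates access requests against them, and every decision lands in an append-only audit store. All class names, rules and identities below are hypothetical—this is an architectural sketch, not code from the initiative:

```python
import time

class MetadataStore:
    """Knowledge/metadata store: maps datasets to business or technical tags."""
    def __init__(self):
        self._tags = {}
    def tag(self, dataset, *tags):
        self._tags.setdefault(dataset, set()).update(tags)
    def tags_for(self, dataset):
        return self._tags.get(dataset, set())

class AuditStore:
    """Append-only audit log -- the 'bottomless pit' of governance evidence."""
    def __init__(self):
        self.entries = []
    def record(self, event):
        self.entries.append({"ts": time.time(), **event})

class PolicyEngine:
    """Rules-based policy engine: each rule is a predicate over (user, tags)."""
    def __init__(self, metadata, audit):
        self.metadata, self.audit, self.rules = metadata, audit, []
    def add_rule(self, name, predicate):
        self.rules.append((name, predicate))
    def check(self, user, dataset):
        tags = self.metadata.tags_for(dataset)
        failed = [name for name, pred in self.rules if not pred(user, tags)]
        allowed = not failed
        self.audit.record({"user": user, "dataset": dataset,
                           "allowed": allowed, "failed_rules": failed})
        return allowed

# Usage: only compliance-team identities may touch datasets tagged PII.
metadata, audit = MetadataStore(), AuditStore()
engine = PolicyEngine(metadata, audit)
metadata.tag("patient_claims", "PII", "healthcare")
engine.add_rule("pii-restricted",
                lambda user, tags: "PII" not in tags or user.endswith("@compliance"))

print(engine.check("analyst@marketing", "patient_claims"))   # False (denied)
print(engine.check("auditor@compliance", "patient_claims"))  # True (allowed)
```

Note that every `check` call is logged whether access is granted or denied, which is what turns a workflow from a “black box” into the auditable, traceable landscape the DGI describes.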
Back to open source
The road map for the Data Governance Initiative ultimately leads back to the Apache Incubator, where the work will be completed 100% in the open. While Hortonworks and the Apache Software Foundation haven’t yet settled on a name for the eventual Hadoop project, the organizations agree they have hit on the right way to solve Hadoop’s data governance problem.
“Rather than building this gigantic Noah’s Ark, we build something that can morph and change into what you need,” said Ahn.
Once all the core components are built and the DGI reaches its incubator phase, the Apache community will begin contributing to the initiative not to replace existing tools, but to foster compliance in working with third parties. What’s unique about the DGI is that when it does find its way back into the Apache Incubator (projected for the middle of the first quarter of 2015), its committers will come from a new generation of companies and industries integrating Hadoop technologies.
On top of immutable types, the contributors are working to add automated ingestion and tagging capabilities, enhancements to the policy engine, and Apache Hive schema lineage to the initiative. In the near future, Hortonworks also plans to roll out a DGI certification program around the time of the next Strata + Hadoop World conference in February.
“This is a next evolutionary step in Hadoop adoption,” said Ahn. “In Big Data, you have something that’s gone through a prototyping phase and a [proof of concept] phase, and now to be a first-class citizen you’ve got to play by the rules. This is the next level of maturity for us; it’s taking responsibility for governance.”