MapReduce is a cost-effective way to process “Big Data” consisting of terabytes, petabytes or even exabytes of data, although there are also good reasons to use MapReduce in much smaller applications.

MapReduce is essentially a framework for processing widely disparate data quickly without needing to set up a schema or structure beforehand. The need for MapReduce is being driven by the growing variety of information sources and data formats, including both unstructured data (e.g. freeform text, images and videos) and semi-structured data (e.g. log files, click streams and sensor data). It is also driven by the need for new processing and analytic paradigms that deliver more cost-effective solutions.

MapReduce was popularized by Google as a means of dealing effectively with Big Data, and an open-source implementation of it now sits at the core of Apache Hadoop.

A brief history of Hadoop
Hadoop was created in 2004 by Doug Cutting (who named the project after his son’s toy elephant). The project was inspired by Google’s MapReduce and Google File System papers that outlined the framework and distributed file system necessary for processing the enormous, distributed data sets involved in searching the Web. In 2006, Hadoop became a subproject of Lucene (a popular text-search library) at the Apache Software Foundation, and then its own top-level Apache project in 2008.

Hadoop provides an effective way to capture, organize, store, search, share, analyze and visualize disparate data sources (unstructured, semi-structured, etc.) across a large cluster of computers, and it is capable of scaling from tens to thousands of commodity servers, each offering local computation and storage resources.

The map and reduce functions
MapReduce is a software framework that includes a programming model for writing applications capable of processing vast amounts of distributed data in parallel on large clusters of servers. During the “Map” phase, a master node reads the input file or files, partitions the data set into smaller subsets, and distributes those subsets to worker nodes for processing. The worker nodes can, in turn, distribute the work further, creating a hierarchical tree structure. As a result, every worker node operates on a much smaller portion of the problem in parallel.
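To make the model concrete, here is a minimal word-count mapper of the kind Hadoop Streaming can run as a worker-side map task. It is a sketch rather than production code: it reads raw text from standard input and emits one tab-separated key/value pair per word.

    #!/usr/bin/env python
    # mapper.py -- a minimal word-count mapper for Hadoop Streaming.
    # Reads lines of text on stdin; emits "word<TAB>1" for each word.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word, 1))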

During the “Reduce” phase, the master node collects the processed results from all of the worker nodes, and then combines, sorts and writes them to the final output file. This output can, optionally, become the input to additional MapReduce jobs that further process the data. It is this ability to process large data sets in manageable steps that makes MapReduce particularly appealing. Programmers also appreciate being able to write MapReduce jobs in virtually any language, including C, Java, Perl, PHP or Python (for example, through the Hadoop Streaming utility), which makes it easy to incorporate existing algorithms into the Hadoop environment.
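The matching reducer is just as small. Hadoop Streaming sorts the mapper output by key before handing it to the reducer, so each word’s counts arrive grouped together and the reducer only has to sum consecutive values for the same key. Again, a sketch under Streaming’s stdin/stdout conventions:

    #!/usr/bin/env python
    # reducer.py -- sums the per-word counts emitted by mapper.py.
    # Streaming delivers input sorted by key, so identical words are adjacent.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

A job built from these two scripts is typically launched with the hadoop-streaming jar that ships with Hadoop. The jar’s exact path varies by version and distribution, and the HDFS paths below are purely illustrative; the scripts must also be marked executable:

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
        -input /data/books -output /data/wordcounts \
        -mapper mapper.py -reducer reducer.py \
        -file mapper.py -file reducer.py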

Hadoop is a complete system centered on MapReduce at its core, with other functions in supporting roles. The most important is the Hadoop Distributed File System (HDFS), which serves as the primary storage system. HDFS replicates and distributes the blocks of source data to the compute nodes throughout a cluster to be analyzed by one or more applications.
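The degree of replication is an ordinary HDFS setting. As a minimal illustration (dfs.replication is the stock Apache Hadoop property name, and three copies per block is the customary default), an hdfs-site.xml entry might read:

    <!-- hdfs-site.xml: keep three copies of every block -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>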

Another critical element is the NameNode, which serves as the central repository for information about HDFS, making it the rough equivalent of a file directory. All HDFS metadata (e.g. the namespace and block locations) is stored in memory on a single node designated as the primary NameNode. The NameNode typically persists the namespace to both a local disk and a network-attached storage (NAS) system to serve as a backup, while block locations are maintained solely in memory, where they are rebuilt from the block reports that the DataNodes send.

The JobTracker is responsible for determining which servers will run which MapReduce tasks, and for tracking the progress and ultimate completion of every job. Normally a task is assigned to a server (a DataNode) that stores its input data or, failing that, to a server in the same rack. The JobTracker then works with a TaskTracker process on each server to monitor the individually assigned tasks.
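That locality preference is easy to model. The toy scheduler below is purely illustrative (it is not the JobTracker’s actual algorithm, and all of the node and rack names are made up): it assigns a map task to a free node holding a replica of the task’s input block when possible, falls back to a free node in the same rack as a replica, and only then picks any free node.

    # locality.py -- a toy model of data-local task assignment.
    # replicas: block -> nodes holding a copy; rack: node -> rack id.
    def assign(block, replicas, rack, free_nodes):
        # 1. Node-local: a free node that already stores the block.
        for node in replicas[block]:
            if node in free_nodes:
                return node, "node-local"
        # 2. Rack-local: a free node sharing a rack with some replica.
        replica_racks = {rack[n] for n in replicas[block]}
        for node in free_nodes:
            if rack[node] in replica_racks:
                return node, "rack-local"
        # 3. Otherwise: any free node (the data must cross racks).
        return next(iter(free_nodes)), "off-rack"

    replicas = {"blk_1": ["n1", "n2", "n3"]}
    rack = {"n1": "r1", "n2": "r1", "n3": "r2", "n4": "r2"}
    print(assign("blk_1", replicas, rack, {"n4"}))  # ('n4', 'rack-local')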

MapReduce is suitable for virtually any data-intensive application. Sample applications, in addition to its obvious use in Web crawling and search engines, include data, traffic or log analytics; image or text processing; data mining, modeling or archiving; and machine learning.

A rich, growing ecosystem
The open-source nature of Apache Hadoop creates an ecosystem that facilitates constant advancements in its capabilities, performance, reliability and ease of use. These enhancements can be made by any individual or organization—a global community of contributors—and are then either added to the basic Apache library or made available in separate (often free) distributions.

Some of these distributions offer significant enhancements to the basic Apache version. One major recent enhancement is the ability to use the industry-standard Network File System (NFS) interface, with random read and write support, in addition to HDFS. With NFS, any remote client can simply mount the cluster, and application servers can then write their data and log files directly to it, rather than writing first to either direct-attached or network-attached storage. Another major enhancement overcomes the lack of high availability and data protection capabilities in the basic Apache distribution by incorporating resilient distributed architectures, snapshots and mirroring that together make Hadoop much more suitable for production deployments.

There are also a wide variety of open-source modules that leverage the MapReduce framework, and different distributions of Hadoop now integrate, harden, test and validate various collections of these. Examples include:
•    Higher-level language access (Hive’s SQL-like HiveQL and Pig’s Pig Latin dataflow language)
•    Database components (HBase)
•    Workflow management libraries (Oozie)
•    Machine-learning libraries (Mahout)
•    Complex data-processing workflow creation (Cascading)
•    SQL-to-Hadoop database import/export (Sqoop)

Finally, significant investments are being made to expand the range of commercial offerings in the Hadoop market. Data analytic tools, development environments, and connectors to existing data-warehouse and business-intelligence solutions are all available today to better facilitate the use of MapReduce.

In short, MapReduce is a big deal. You owe it to yourself to learn more about how this powerful tool can best help you understand and unlock the value in your own Big Data.