We’ve all heard of Apache HTTP Server and Apache Struts, but the projects we haven’t heard of could become game-changers in their own right. Here’s what you may have missed so far:
Apache Chemistry, an open-source implementation of the OASIS Content Management Interoperability Services (CMIS) standard, was just promoted to a top-level project. From the OASIS standard:
“The Content Management Interoperability Services (CMIS) standard defines a domain model and Web Services and RESTful AtomPub bindings that can be used by applications to work with one or more Content Management repositories/systems.”
The CMIS interface is designed to be layered on top of existing content management systems and their existing programmatic interfaces. It is not intended to prescribe how specific features should be implemented within those CM systems, nor to exhaustively expose all of the CM system’s capabilities through the CMIS interfaces. Rather, it is intended to define a generic/universal set of capabilities provided by a CM system and a set of services for working with those capabilities.
Apache Chemistry’s most recent activity has centered on releasing the project and porting its client and server pieces to various languages. While the server is written in Java, the client side of the Chemistry project has already been ported to .NET, PHP and Python.
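To get a feel for what CMIS buys you, here is a minimal sketch that uses Chemistry’s OpenCMIS Java client to connect to a repository and list its root folder. The AtomPub URL and credentials are placeholders for whatever repository you happen to be running.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisBrowser {
    public static void main(String[] args) {
        // Connection details are placeholders; point these at your own CMIS repository.
        Map<String, String> params = new HashMap<String, String>();
        params.put(SessionParameter.ATOMPUB_URL, "http://localhost:8080/cmis/atom");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        params.put(SessionParameter.USER, "admin");
        params.put(SessionParameter.PASSWORD, "admin");

        // Open a session against the first repository the server advertises.
        SessionFactory factory = SessionFactoryImpl.newInstance();
        Session session = factory.getRepositories(params).get(0).createSession();

        // List whatever sits in the repository's root folder.
        for (CmisObject child : session.getRootFolder().getChildren()) {
            System.out.println(child.getName() + " (" + child.getType().getId() + ")");
        }
    }
}
```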
#!
Vysper is the Apache take on an XMPP server. In case you’re not familiar with the protocol, XMPP is the same instant messaging standard Google uses for GTalk. Most folks refer to XMPP as Jabber, and the world of Jabber has long been dominated by ejabberd, a chat server written in Erlang. As an Erlang program, ejabberd is famously scalable and has essentially cornered the XMPP market, so Vysper takes another route.
While Vysper can be used as a standalone Jabber server, it’s more at home as an embeddable one. It’s also focused far more on hosting rooms and group chats than on the run-of-the-mill person-to-person chatter that dominates instant messaging. And finally, because XMPP is an expressive (if verbose) message exchange protocol, Vysper can also be used as a data pipe for sending information between applications over the Internet.
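Here is a rough sketch of what embedding Vysper looks like, based on the project’s documentation; the package names, the TLS requirement, and the domain and certificate values are assumptions that may vary between versions.

```java
import java.io.File;

import org.apache.vysper.mina.TCPEndpoint;
import org.apache.vysper.storage.inmemory.MemoryStorageProviderRegistry;
import org.apache.vysper.xmpp.modules.extension.xep0045_muc.MUCModule;
import org.apache.vysper.xmpp.server.XMPPServer;

public class EmbeddedXmppServer {
    public static void main(String[] args) throws Exception {
        // The domain, certificate file and password below are placeholders.
        XMPPServer server = new XMPPServer("example.com");
        server.addEndpoint(new TCPEndpoint());                                   // listen for XMPP clients over TCP
        server.setStorageProviderRegistry(new MemoryStorageProviderRegistry());  // keep accounts and rosters in memory
        server.setTLSCertificateInfo(new File("server.jks"), "secret");          // Vysper expects a TLS certificate

        server.start();

        // Multi-user chat (XEP-0045) is where Vysper is most at home.
        server.addModule(new MUCModule());
    }
}
```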
#!
Apache Tika is a toolkit for detecting and extracting metadata and structured text content from various documents using existing parser libraries. Tika began life as a sub-project of Apache Lucene, the open-source Java search engine. The project is very useful for expanding the capabilities of your existing search engine, and it has also become relevant to the Apache Hadoop project, where Tika can be used against unstructured data.
Tika was recently updated to version 0.9, a release that brought a host of major bug fixes that should make the project much easier for first-timers to pick up. Additionally, developers can now fork parsing operations into separate processes, letting Tika spread its work across processor cores or even a cluster.
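For a taste of how little code extraction takes, here is a minimal sketch using Tika’s facade class; the file name is a placeholder, and the same two calls work for PDFs, Word files, HTML and plenty of other formats.

```java
import java.io.File;

import org.apache.tika.Tika;

public class ExtractText {
    public static void main(String[] args) throws Exception {
        // The file path is a placeholder; Tika auto-detects the format.
        File doc = new File("report.pdf");

        Tika tika = new Tika();
        System.out.println("Detected type: " + tika.detect(doc));

        // Pull the plain text out of the document, ready to feed to Lucene,
        // Solr or a Hadoop job.
        String text = tika.parseToString(doc);
        System.out.println(text);
    }
}
```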
#!
From the project’s page on the Apache site: “Apache MINA is a network application framework which helps users develop high-performance and high-scalability network applications easily. It provides an abstract, event-driven, asynchronous API over various transports such as TCP/IP and UDP/IP via Java NIO.”
MINA is like a network engine for your applications. Most developers know how to open sockets and pack data into TCP/IP or UDP packets, but an application’s real focus tends to be its function, not its network code. With MINA, developers can simply drop in highly optimized, scalable networking code. One project built on MINA, OpenLSD (Open Legacy Storage Document), found that MINA let it pull down more than 1,000 documents per second across the Internet.
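To make that concrete, here is a minimal sketch of a line-echoing server written against MINA 2’s NIO acceptor; the port number and the echo handler are placeholders, but the shape of the code is the point: codecs and handlers instead of raw socket plumbing.

```java
import java.net.InetSocketAddress;
import java.nio.charset.Charset;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();

        // Decode and encode network traffic as lines of UTF-8 text.
        acceptor.getFilterChain().addLast("codec",
                new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));

        // The application logic: echo each line back to the sender.
        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void messageReceived(IoSession session, Object message) {
                session.write(message);
            }
        });

        // The port is arbitrary for this sketch.
        acceptor.bind(new InetSocketAddress(9123));
    }
}
```

Everything network-related, from thread management to partial reads, stays inside MINA; the handler is the only piece the application has to own.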
MINA is so powerful and scalable that it is also used by some telecom companies in their low-level packet management systems.
#!
Hah! Of course you’ve heard of Hadoop by now. But after many lengthy discussions with numerous Java developers from around the world, I am not convinced that most developers “get” Hadoop. This open-source implementation of map/reduce is so much more than that. It’s also a distributed file system, a cluster management application and a batch job execution engine.
But what most Java developers don’t seem to get is that Hadoop is not meant to be super-fast, super-cool software. It’s just the underbelly of what developers have been writing by hand over the past 30 years. Many of the developers I speak to shrug at the mention of Hadoop: “It’s not that cool. It’s just map/reduce.”
What they are ignoring is that, with Hadoop, 90% of the difficult coding simply isn’t needed. The HDFS distributed file system alone is the kind of infrastructure I’ve seen numerous teams either write from scratch or use as an excuse for never starting a project at all.
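As a taste of what that free infrastructure looks like, here is a small sketch against the HDFS client API that copies a local file into the cluster and lists the result; the namenode URI and paths are placeholders.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        // The namenode URI and paths are placeholders for a real cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        // Copy a local file into the distributed file system...
        fs.copyFromLocalFile(new Path("/tmp/logs.txt"), new Path("/data/logs.txt"));

        // ...and list what the cluster is now storing and replicating for us.
        for (FileStatus status : fs.listStatus(new Path("/data"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        fs.close();
    }
}
```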
But the real excitement here is no longer just Hadoop; it’s the numerous sub-projects, such as Avro, Hive, Pig and, to a lesser extent, ZooKeeper. And don’t forget Mahout, the open-source machine-learning library. Given a large enough Hadoop cluster, there is nothing one cannot do.