With projects like Rackspace’s OpenStack and VMware’s vCloud already accumulating code and requirements, the race to treat the data center as a single machine is off to a flying start. But there is still a great deal to be built, and the term “data center operating system” itself has not been around long enough to become a buzzword.

Data center operating systems were first discussed in public by VMware back in 2008, but the company quickly dropped the term in favor of “cloud operating system.” That’s probably a more accurate way to describe the fundamental shift such systems bring to the data center, but at the end of the day, the label is irrelevant; it’s the software that matters.

Already, companies like Eucalyptus, Nimbula and Rackspace are pushing their own solutions for hosting cloud-style systems in private data centers. Some would call this the beginning of a move to private clouds, but from the other side, it could also be seen as a move away from traditional data center management tools and static servers.

But what does this coming shift in the data center mean for developers? Chris Pinkham, cofounder and CEO of Nimbula, thinks that while this shift will be orchestrated by operations staff, it is the developer who will reap the rewards. He said that companies are already seeing the coming move to data center operating systems as a way to simplify deployment.

Nimbula is building its data center operating system on the assumption that companies need to deploy the same software around the world at disparate data centers. In most enterprises, that means deploying to a different environment at each data center.

Data centers have typically grown organically to meet needs. That means every roomful of servers is managed differently, and is likely built on uneven and inconsistent versions of software stacks. Deploying a single application to three different data centers can mean three different quality assurance, build and packaging cycles, and that means three times the work.

But what Eucalyptus, Nimbula, Rackspace and VMware are discovering is that offering a layer of services and provisioning on top of commodity hardware can make the deployment process easier for all involved, especially when that layer of software can, itself, be easily deployed around the globe. It certainly makes sense on paper, and according to Pinkham, it’s already making sense for early customers.

And while the overall form of these new data center operating systems is still fluid, one thing already seems certain: In the future, deploying an application will mean deploying a disk image, whether an ISO or an Amazon Machine Image.

The data center operating system will push the desktop and server operating system into the category of build component. Thus, developers won’t just be accountable for the application, but also for the streamlined disk image they’ll present as their deployable artifact.

The good news is that the artifact will be a single file, with no dependencies littered around the server by hand. The bad news is that developers will need to become experts at slimming down and packaging commercial and open-source operating systems.
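To make that concrete, here is a minimal sketch, in Python, of what such a build step might look like: a stripped-down OS tree and the application get rolled into one single-file image. The paths, the pruned directories and the artifact name are illustrative assumptions, not any particular vendor’s format.

import tarfile
from pathlib import Path

BASE_OS = Path("base-os-root")        # hypothetical pre-built minimal OS tree
APP_BUILD = Path("build/myapp")       # hypothetical compiled application
PRUNE = ("usr/share/doc", "usr/share/man")  # weight a slimmed image can shed

def build_image(out: str = "myapp-image.tar.gz") -> str:
    """Roll the trimmed OS base and the application into one deployable file."""
    with tarfile.open(out, "w:gz") as image:
        for path in BASE_OS.rglob("*"):
            rel = str(path.relative_to(BASE_OS))
            if rel.startswith(PRUNE):
                continue  # slimming: skip anything the application never uses
            image.add(str(path), arcname=rel, recursive=False)
        # The application ships inside the same image, so deploying means
        # copying one file rather than installing dependencies by hand.
        image.add(str(APP_BUILD), arcname="opt/myapp")
    return out

if __name__ == "__main__":
    print("wrote", build_image())

The point of the exercise is the shape of the output: one file, produced by the build, that carries everything the application needs to run.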

But is that really such a bad thing? Today, developers and operations have to coordinate as though they were unloading a cargo ship at port before deployment can take place. Each deployable object has to be tested and retested on specific versions of the underlying stack, and a single version mismatch can cause unexpected issues at launch.

Operations is typically given a request for a Windows server, an Ubuntu server, or a specific version of Solaris to be placed on a machine ahead of deployment. That means someone in operations spends his or her time installing and configuring that machine for a project he or she likely knows little about.

In the future, hopefully, this will not be the case. When every targeted environment is little more than a bare server with an ESX sticker on the front, developers and operations won’t have so much wiggle room between them. And as the data center operating system expands, configuration management, backup and replication can all become automated processes, easily reproduced from one data center to the next.
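As a rough illustration of that idea (not any shipping product’s API), the Python sketch below applies one declarative description of an application, its image, replica count and backup schedule, identically to several data centers. The DataCenter class, its endpoints and its methods are hypothetical stand-ins for whatever interface these data center operating systems eventually expose.

from dataclasses import dataclass

@dataclass
class DataCenter:
    """Hypothetical handle on one site's provisioning API."""
    name: str
    endpoint: str

    def push_image(self, artifact: str) -> None:
        # Stand-in for uploading the single-file image from the build step.
        print(f"[{self.name}] pushing {artifact} to {self.endpoint}")

    def apply_config(self, config: dict) -> None:
        # Stand-in for declarative configuration: the same replica count,
        # backup schedule and replication policy, applied at every site.
        print(f"[{self.name}] applying {config}")

SITES = [
    DataCenter("us-east", "https://dc1.example.com/api"),
    DataCenter("eu-west", "https://dc2.example.com/api"),
    DataCenter("ap-south", "https://dc3.example.com/api"),
]

CONFIG = {"replicas": 4, "backup": "hourly", "replicate_to": "nearest-peer"}

for site in SITES:
    site.push_image("myapp-image.tar.gz")  # identical artifact everywhere
    site.apply_config(CONFIG)              # identical configuration everywhere

Because every site runs the same loop over the same description, adding a fourth data center becomes one more entry in a list rather than another round of hand configuration.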

It may all sound like Aldous Huxley is coming to the data center, but we’re not quite in this brave new world yet. Perhaps the single biggest impediment to the success of such data center operating systems is simply that the race has barely begun.

When those various projects reach version 1.0, the real race will begin as users start to pick their favorites. This time, however, the developers should have a very strong say in who wins.