I’m surely not the first person to point out the sudden interest in the cloud these days. If I were to believe everything I read, I’d be forced to conclude that our universe has suddenly collapsed into a computing space consisting of two endpoints: the cloud on the back end and mobile devices on the front end. Desktops? Laptops? Local servers? Pshaw, passé!

This might indeed be how the world shakes out in a few years, but for the time being, we have to solve what’s here and now. The first point to make, then, is that much of the interest in clouds is directed toward internal clouds. IT departments are simply not going to ship their source code or data to some collection of resources hosted by the latest cloud startup; there is a very natural security aversion to doing so. What host, beyond possibly Salesforce, has established its credentials in security and uptime at a level sufficient to convince IT organizations? So far, very few.

Security and reliability, however, are only part of the issue. An additional concern is the difficulty of identifying a compelling benefit for IT organizations in hosting apps or data in a cloud outside the firewall. IT managers who grok the benefits of virtualization and clouds are happier running those platforms on so-called “private clouds,” that is, clouds within the firewall.

The argument frequently trotted out in favor of the undifferentiated cloud is the savings realized on hardware (the capex) and on management costs (the opex). The hole in this argument is that hardware is already inexpensive, and the management savings are hard to capture. Certainly, provisioning and decommissioning machines is easier, but new management policies and skills must be learned.

For example, a frequent problem is the profusion of VM snapshots and templates. These are large files that are expensive to store and move around. They also offer comparatively little metadata to guide administration. Emergency management is no trivial matter either. If a cloud system goes down, determining which hardware item has actually failed and what its effects on other jobs will be is not easy.

This points to what is probably one of the biggest administrative and management headaches; namely, that diminished performance of one server can affect multiple unrelated applications that happen to be partially (or wholly) hosted on that server in the cloud. A single problem can now seep into many applications.

By comparison, the traditional hardware approach inherently limits failures or problems to the server on which (generally) only a single application runs. I don’t want to belabor the point, but for the time being, it’s safe to say that we don’t know how much is actually saved in systems management by use of the cloud.

Public clouds, such as Amazon EC2, Google App Engine (GAE), and Microsoft Azure, bring constraints and problems of their own. GAE is a limited deployment environment. Its Java runtime restricts what your code can do: no threads, no sockets, no use of java.awt.Color, nor some NIO classes, nor several standard XML streaming classes. Its Python runtime requires version 2.5.2 of the language, which was released in February 2008.
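To make the thread restriction concrete, here is a minimal sketch of the kind of defensive probe code ported to a sandboxed runtime might use. The class name `SandboxProbe` is hypothetical; the assumption (from GAE’s documented Java restrictions) is that the sandbox rejects thread creation with a runtime exception, so the code catches it and reports whether threading is available. On an ordinary JVM, the probe succeeds.

```java
// Hypothetical probe for a sandboxed Java runtime (such as the early
// GAE Java environment) that disallows application-created threads.
public class SandboxProbe {

    // Returns true if the runtime permits spawning a thread,
    // false if the attempt is rejected by the sandbox.
    public static boolean threadsAllowed() {
        try {
            Thread t = new Thread(new Runnable() {
                public void run() { /* no-op: existence is the test */ }
            });
            t.start();
            t.join();
            return true;
        } catch (Throwable e) {
            // A restricted runtime throws here; fall back to
            // single-threaded processing in that case.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(threadsAllowed() ? "threads-ok" : "no-threads");
    }
}
```

Run locally, this prints `threads-ok`; the point is that the same binary deployed to a sandbox like GAE’s would take the fallback path instead, which is exactly the sort of divergence that makes these environments tricky to target.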