Because cloud computing depends on utility access to compute cycles, components and tools, it’s arguable that you needn’t change much about your process in order to start coding.

“I think, naturally, people will assume whatever life-cycle approach they have in place would be applied to the cloud, but they could be missing a big opportunity,” said John Rhoton, author of “Cloud Computing Architected” and “Cloud Computing Explained: Implementation Handbook for Enterprises.”

“You have the opportunity to completely revisit your testing processes or deployment approach,” he added.

But building on cloud platforms is more than just a timesaver. With a future so bright, developers run the risk of frittering away their efforts across platform silos without seeing a viable return. While a little free play is fine, it’s important to remember that elasticity should be the overarching design concern.

“You don’t need anything to get started; you can develop the next Google or Salesforce with no resources,” said Rhoton. “You’re getting all of this access, so presumably that’s what your goal is, to get a lot of use out of it.”

But above all, you’ll want to be agile. Life-cycle tools can help you squeeze the most out of each stage of application development.

Project and portfolio management
Thanks to an increasingly distributed, global workforce and the experience of working on modularized, often open-source projects, developers have become accustomed to how cloud application-building feels. So have those who seek to wrangle them: project managers and team leaders. That’s why agile project management and requirements-discovery tools have been delivered via the software-as-a-service model for several years now.

As a reaction to the heavyweight waterfall processes of the past, extreme programming and lean concepts favor iterative prototyping over design-first development, an approach that proved ideal for the SaaS business model. If you’re looking for tools in this space, AccuRev, Rally and ThoughtWorks Mingle come to mind for agile, developer-led teams.

At a higher level, portfolio management and governance concerns can be managed with SaaS ALM offerings that tap not only into development metrics such as successful builds, tests, releases or defect rates, but also enterprise financials, human resources, governance, risk and compliance systems. CA Clarity PPM is an example of an existing portfolio management product that has added applications to the Force.com platform and the Salesforce.com AppExchange.

Design and architecture
No matter how you gather or discover your requirements, the cloud opportunity means you’ll want to consider integration from the get-go. “If you want to get the most out of this cloud ecosystem, then the more hooks you can create so you build an ecosystem, the more opportunities you’ll see to monetize it,” said Rhoton. Take care, however. Twitter, for example, has buckled numerous times under the weight of third-party applications using its APIs.

This also represents a major hiring opportunity. Job listings abound looking for SaaS architects to morph physical software products into services. Once again, the service-oriented architecture is key, Rhoton claimed. “SOA makes it a lot easier to selectively expose services, making it possible for people to build on your platform. But doing integration points becomes really complex, because you can’t have every other module running on developer machines. You have to put stubs there for external modules and do subsequent testing once you’ve staged the application.”
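
To make Rhoton’s stubbing point concrete, here is a minimal sketch of the pattern: local code depends on an interface rather than the remote module, and a stub stands in until the application is staged against the real service. The names (BillingService, StubBillingService, place_order) are invented for illustration.

```python
# A minimal sketch of the stubbing pattern described above: the billing
# service is an external module you can't run on a developer machine, so
# local tests code against an interface and swap in a stub. All names here
# are hypothetical.
from abc import ABC, abstractmethod

class BillingService(ABC):
    """The interface your application depends on, not the remote implementation."""
    @abstractmethod
    def charge(self, account_id: str, cents: int) -> bool: ...

class StubBillingService(BillingService):
    """Stand-in used locally; records calls instead of hitting the real API."""
    def __init__(self):
        self.charges = []
    def charge(self, account_id: str, cents: int) -> bool:
        self.charges.append((account_id, cents))
        return True  # always "succeeds" so other modules can be exercised

def place_order(billing: BillingService, account_id: str, total_cents: int) -> str:
    return "confirmed" if billing.charge(account_id, total_cents) else "declined"

if __name__ == "__main__":
    stub = StubBillingService()
    assert place_order(stub, "acct-42", 1999) == "confirmed"
    print("local test passed; rerun against the real service once staged")
```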

But even before the elasticity or integration questions, you’ll need an even bigger-picture view. “The macro mistake that I primarily see is that organizations don’t do enough planning, so everything becomes a tactical one-off project: This is the Amazon project, this is the Salesforce project,” said David Linthicum, author of “Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide.”

Developers and architects must think about how cloud applications will work and play well with their other systems, he advised. Two common problems he sees are, “Approaching with a technology-first attitude and picking what’s popular. There’s a tendency to function in herds. If everyone’s going to Amazon, they’re going to go there, but that’s not always a fit for everybody.”

Development tools
So now you’re thinking about actually building something. Though the market may be smaller than Salesforce’s SaaS or Amazon’s IaaS, this is where you’ll find Microsoft and IBM on equal footing with Google App Engine, Heroku and Cloud Foundry, among many others.

Redmond’s big advantage? The massive installed base of Visual Studio. Turning a .NET app into one that runs on Azure is easy: You simulate locally, then deploy a cloud services package and a configuration file. Azure applications consist of Web roles (essentially .NET Web applications, such as MVC or Web Forms) and worker roles (which handle number crunching and logic). Runtime, middleware, OS, virtualization, servers, storage and networking are Azure’s responsibility.

Similarly, IBM’s Smart Business Development and Test Cloud lets developers serve themselves a panoply of services, including IBM Rational Software Delivery Services for Cloud Computing. True to form, there’s support for Eclipse, Linux, Java and Java EE, and clients can work with their own images as well as those from IBM. Using these tools, Big Blue claims, can reduce capital and licensing expenses by up to 75%, operating and labor costs up to 50%, development and testing setup time from weeks to minutes, and defects by up to 30% (via “enhanced modeling”). It also can “accelerate cloud computing initiatives with IBM CloudBurst implemented through QuickStart services.”

Testing in the cloud
Perhaps one of the most promising life-cycle angles is in testing, Rhoton observed. “Your options for testing are dramatically improved in a cloud-based environment. If you need 10,000 instances for half an hour, you can get them. Or you can also have 15 versions running at the same time, or do A-B testing.”
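
A rough sketch of that elasticity, assuming AWS EC2 and the boto3 SDK as one concrete example; the AMI ID, instance type and fleet size are placeholders, and a real load test would wait for the instances to boot and gather results before tearing down.

```python
# Rough sketch of the "spin up a big test fleet, then throw it away" idea,
# using the boto3 AWS SDK as one example. The AMI ID, instance type and
# fleet size are placeholders.
import time
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",      # placeholder test image
    InstanceType="t3.micro",     # placeholder size
    MinCount=50, MaxCount=50,    # scale this to your test, not your racks
)
ids = [i.id for i in instances]
print(f"launched {len(ids)} test instances")

try:
    time.sleep(30 * 60)          # run the load test for half an hour
finally:
    ec2.meta.client.terminate_instances(InstanceIds=ids)
    print("test fleet terminated; you pay only for the half hour")
```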

Existing Web application testing tools such as Selenium are already well established in this space, and their role is now expanding to include cloud testing. Compuware Gomez is another leader in application performance testing, claiming to provide a unified view across the entire application delivery chain, from the browser or mobile device to the cloud or data center. Ultimately, testing a SaaS application isn’t terribly different from testing one installed on-premises, but the laundry list should include functional, multi-platform, load and stress, remote access, security, live deployment, disaster recovery or rollback, and internationalization testing.
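
A minimal Selenium WebDriver check, for instance, might look like the sketch below; the URL and expected page title are placeholders, and a cloud test run would typically execute many such checks in parallel across browsers and regions.

```python
# A minimal functional check with Selenium WebDriver (Python bindings).
# The URL and expected title are placeholders.
from selenium import webdriver

driver = webdriver.Firefox()      # or Chrome(), depending on your grid
try:
    driver.get("https://app.example.com/login")
    assert "Login" in driver.title, f"unexpected title: {driver.title}"
    print("smoke test passed")
finally:
    driver.quit()
```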

An extremely promising area for cloud computing is collective intelligence. In “Digital Forensics for Network, Internet, and Cloud Computing,” author Terrence Lillard proposed multiple scenarios where the cloud will provide insight into user behavior, best practices and performance. “A user can ask, ‘If I add 4GB to my server, running the loads I am running, what will be the performance gains?’ In a fully populated collective intelligence system, a number of computers with the same configuration as the one in question will be examined both with and without the additional 4GB to determine the gain. Thus, the person asking the question gets a precise answer in a few seconds without the need for trial and error, extensive resource, or use of an expensive consultant. The user can then ask, ‘What if I add 16GB?’ and review the results again until they have the optimized number for their budget and needs.”
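
A toy sketch of the kind of query Lillard describes might look like the following, with invented telemetry: compare peers that share your configuration, with and without the extra memory, and report the observed gain.

```python
# Toy illustration of the "what if I add 4GB?" query: compare peers that
# share your configuration, with and without the extra memory, and report
# the observed gain. The telemetry rows are invented.
from statistics import mean

telemetry = [
    # (cpu_cores, ram_gb, requests_per_second) reported by peer systems
    (4, 8, 410), (4, 8, 395), (4, 12, 520), (4, 12, 505), (8, 16, 900),
]

def expected_gain(cores: int, ram_now: int, ram_added: int) -> float:
    baseline = mean(r for c, g, r in telemetry if c == cores and g == ram_now)
    upgraded = mean(r for c, g, r in telemetry if c == cores and g == ram_now + ram_added)
    return (upgraded - baseline) / baseline

print(f"adding 4GB: ~{expected_gain(4, 8, 4):.0%} throughput gain observed among peers")
```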

The future of DevOps
Just as every new abstraction is predicted to eliminate the need for software developers, utility computing has recently been posited as a permanent replacement for IT and operations staff. An April 2011 Forrester report, “Augment DevOps With NoOps,” aimed the crosshairs at the DevOps movement, an optimization of continuous deployment that is itself an outcome of agile development.

DevOps seeks to bridge the gap between developers and operations professionals, establishing communication from the requirements-gathering phase through capacity planning, testing, configuration, deployment and maintenance. Increasingly, entities such as Flickr are providing an attention-deficient user base with an unending variety of feature releases. That’s an extreme example, but the principle of DevOps remains valuable for any organization seeking a smoother “last mile” of agile development.

“As you continue to embrace and expand your use of cloud services, also start your transition from DevOps to NoOps,” said Forrester. “In NoOps, the collaboration between dev and ops is at its strongest because the collaboration is in the planning and engineering of a service life cycle. Because of automation, standardization and self-service capabilities, collaboration focused on manual hand-offs will dwindle as collaboration in serving business priorities grows.” However, DevOps should be seen as the evolution of release management.

Not surprisingly, DevOps proponents aren’t throwing in the towel quite yet. In a recent interview, Jez Humble, author of “Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation,” claimed the movement to the cloud is “entirely consonant with DevOps. Development teams will have to learn how to create and manage production-ready systems, and learn how to continuously deliver functionality. That requires excellent automated testing at all levels, and the application of patterns such as branch-by-abstraction, feature bits, and a production-immune system.”

Ultimately, according to Humble, an emphasis on multidisciplinary team culture and software craftsmanship trumps cloud automation. No one, he said, can afford not to “focus relentlessly on our users, which is where the lean startup people are doing some interesting and important work.”

Retiring applications
But wait, the life cycle’s not over yet! One thing remains: What to do with legacy applications. One company’s answer? Put them in the cloud!

RainStor, for example, takes a SaaS approach that “allows companies to instantly retire legacy applications and store the historical data in the cloud,” according to the company’s website. Implementation starts by compressing the structured data, then sending it to the cloud using the RSLoader client-side module. Amazon’s secure storage cloud (S3) or a similar storage cloud holds the encrypted data in RSContainers, while the RSGateway allows data queries via standard reporting or business intelligence tools over ODBC or JDBC.
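
In practice, querying retired data through such a gateway looks like any other ODBC connection. The sketch below is illustrative only: the DSN, table and column names are hypothetical, but the point stands that archived data remains reachable with ordinary SQL and standard drivers.

```python
# Sketch of querying retired data through a gateway over ODBC. The DSN,
# table and column names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=ArchiveGateway;UID=report_user;PWD=********")
cursor = conn.cursor()
cursor.execute(
    "SELECT account_id, closed_date, balance "
    "FROM retired_billing_2007 WHERE closed_date >= ?", "2007-01-01"
)
for account_id, closed_date, balance in cursor.fetchall():
    print(account_id, closed_date, balance)
conn.close()
```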

As for cloud applications themselves, they too, in a not-too-distant future, will become legacy applications.

“Somewhat related to the ease of launching is the fact that it tends to produce application sprawl,” said Rhoton. “No one has a clear sense of what these apps are and who’s using them. That goes back to proper service management.”

Retiring legacy cloud apps can be as simple as turning off the running instances, but questions may remain about the permanence of stored data. Cloud vendors are likely to keep that data in shifting locations across multiple sites. The data could include sensitive customer, machine or account information, yet cloud service providers typically rely on garbage collection, marking data for deletion often months before it is actually erased or overwritten.
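
The mechanics of that first step are simple enough; the sketch below, using the boto3 SDK with placeholder tags, terminates a legacy app’s instances and then lists any storage that outlives them, which is exactly where the lingering-data questions begin.

```python
# Hedged sketch of the retirement step: terminate the instances behind a
# legacy cloud app, then list storage that outlives them so someone decides
# what to do about the data. The tag key/value are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:app", "Values": ["legacy-reporting"]}]
)["Reservations"]
ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if ids:
    ec2.terminate_instances(InstanceIds=ids)
    print(f"terminated {len(ids)} instances")

# Volumes not set to delete-on-termination (and any snapshots) keep the data
# around until someone explicitly removes it.
leftovers = ec2.describe_volumes(
    Filters=[{"Name": "tag:app", "Values": ["legacy-reporting"]}]
)["Volumes"]
for v in leftovers:
    print("still holding data:", v["VolumeId"], v["State"])
```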

Not surprisingly, regulated industries aren’t thrilled with the imprecision of cloud data deletion.

There’s also the risk of data lock-in, but the Storage Networking Industry Association’s Cloud Data Management Interface (CDMI) API is slated to help: It would let cloud vendors migrate customer data directly to other vendors, rather than the current approach, which is to return the data to the customer. It’s also likely that new regulations will crop up around data deletion. But questions of who sees data and what is done to protect it from prying hackers bring up the bigger picture: security.

Application-level security
Security is possibly the biggest perceived threat in cloud computing, but so far those fears have largely not been borne out. From an infrastructure and transmission standpoint, the security policies created and enforced by a cloud vendor are likely to be more state-of-the-art than those of an on-premises installation. The risk of data loss via stolen or compromised physical equipment is arguably lower with a cloud vendor. Further, the emphasis on using private or vertical clouds for sensitive information, regulated industries or government secrets is likely to quell some fears. But there are still holes to be plugged.

“Security is becoming increasingly important. You need to have a very carefully architected authorization system in place that deals with issues of scaling, multi-tenancy, data isolation and identity management,” said Rhoton. “You may have security considerations that you might not have had before: Data could be accessible from outside organizations, competitors or hackers. Or you may have compliance issues for secure credit card payment or privacy regulations.

“The cloud provider can’t do everything. You’re the only one able to ensure that. Encrypt your data and make sure the keys you’re using are managed safely.”

Indeed, one type of key has received little attention: API keys to access cloud services via simple REST interfaces. As Mark O’Neill, CTO of Vordel (a developer of Web service products), said, “If an organization condones the casual management of API keys, they are at risk of, one, unauthorized individuals using the keys to access confidential information; and two, the possibility of huge credit-card bills for unapproved access to pay-as-you-use cloud services.”

Just as you protect passwords and private keys, API keys “should not be stored as files on the file system, or baked into non-obfuscated applications that can be analyzed relatively easily,” he said. The best place to keep them may be in encrypted form on a hardware security module.
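
A minimal sketch of that advice, assuming the key is injected at deploy time (here via an environment variable, ideally backed by an HSM or secrets service) and used only in memory to sign requests; the variable name and signing scheme are illustrative, not any particular provider’s protocol.

```python
# The API key never appears in source or on disk; it is injected at deploy
# time and used only in memory to sign requests. The variable name and the
# HMAC signing scheme are illustrative placeholders.
import hashlib
import hmac
import os

API_SECRET = os.environ["CLOUD_API_SECRET"]   # injected, never hard-coded

def sign_request(method: str, path: str, body: bytes = b"") -> str:
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(API_SECRET.encode(), message, hashlib.sha256).hexdigest()

print(sign_request("GET", "/v1/instances"))
```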

It bears repeating: There are inherent risks in multi-tenancy, including identity and access management, shared hardware, remote storage, data at rest and in transit, and search and access logs retained by providers. These risks also vary depending on the abstraction level of the cloud: IaaS allows customers to manage operating system-level security controls and processes, while the provider implements security at the hypervisor and network levels. SaaS, at the other extreme, requires developers to trust the vendors’ security methods completely, since there is little that remains in their control.

One risk that often escapes notice, however, is that of speed to market: The ability to launch a working application or prototype in minutes often results in little time spent analyzing or programming security controls. These include classic concepts such as fail-safe defaults, the principle of least privilege, and complete mediation of access to all objects.
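
Even a few lines of deliberate design can restore those controls. The toy sketch below shows a deny-by-default policy (fail-safe defaults) with every read passing through the check (complete mediation); the roles, actions and policy table are invented.

```python
# Toy sketch of two classic controls: a deny-by-default (fail-safe) policy
# and complete mediation, i.e. every read goes through the check. Roles,
# actions and the policy table are invented for the example.
PERMISSIONS = {
    ("analyst", "read:reports"),
    ("admin", "read:reports"),
    ("admin", "delete:reports"),
}

def is_allowed(role: str, action: str) -> bool:
    # Fail-safe default: anything not explicitly granted is denied.
    return (role, action) in PERMISSIONS

def read_report(role: str, report_id: str) -> str:
    # Complete mediation: no code path returns data without this check.
    if not is_allowed(role, "read:reports"):
        raise PermissionError(f"{role} may not read reports")
    return f"contents of {report_id}"

print(read_report("analyst", "q3-summary"))
```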

“The days of systems depending on firewalls for security are over,” said Linthicum. “You have to make sure to tier your way through it. That includes encryption, federated identity management built into the governance system, and very sophisticated firewalls.”

It’s inevitable that browser-to-server communication will leak information to eavesdroppers via side-channel features such as packet length and timing, “even if the traffic is entirely encrypted,” according to Xiaofeng Wang, director of the Center for Security Informatics at Indiana University. Sidebuster is a tool designed for detecting just this scenario.
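
Sidebuster detects the leak; one commonly discussed mitigation, sketched below and not part of the tool itself, is to pad encrypted responses up to fixed-size buckets so packet length reveals less about what the user did. The bucket size is an arbitrary example.

```python
# One commonly discussed mitigation for length-based side channels: round
# response sizes up to fixed buckets before encryption. The bucket size is
# an arbitrary example value.
BUCKET = 1024  # pad every response up to a multiple of 1 KB

def pad_response(body: bytes) -> bytes:
    padded_len = -(-len(body) // BUCKET) * BUCKET   # ceil to bucket boundary
    return body + b"\x00" * (padded_len - len(body))

for payload in (b"yes", b"a much longer answer " * 40):
    print(len(payload), "->", len(pad_response(payload)))
```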

And there are many others: IBM’s Dev and Test cloud has numerous security partners, including Navajo Systems (“the world’s first cloud encryption gateway”) and Silanis for e-signature process management. PingConnect offers on-demand single sign-on (SSO) for SaaS, a cloud service that automates Internet user account management for Salesforce, Google Apps, Concur, SuccessFactors, Workday, and other SaaS providers, eliminating passwords and redundant administration while blocking unauthorized access.

Meanwhile, the Cloud Security Alliance this summer began an effort to define security-as-a-service and provide guidance on reasonable implementation practices.

Always on(?) service-level agreements
As with security, the utility computing model depends heavily on assurances from vendors that the lights won’t go out unexpectedly. Of course, they will.

The April 21, 2011 Amazon Web Services outage provided a perfect example of what vendors can do to remain reliable, and what customers can expect in the event of an outage, including how to keep business systems running, à la Netflix. Netflix understood that a 99.95% uptime guarantee explicitly allows for 0.05% downtime, and it planned accordingly, most famously with its infamous Chaos Monkey, which deliberately kills production instances to prove the service tolerates failure. Thanks to good design and dependable replication, the company maintained 100% availability of its AWS-hosted video streaming services during the outage.
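
A toy illustration of the Chaos Monkey idea (not Netflix’s actual tool) is to periodically terminate one random instance from a tagged group, proving continuously that the service survives the 0.05% the SLA never promised. The tags and region below are placeholders, again using the boto3 SDK.

```python
# A toy, hedged illustration of the chaos-testing idea -- not Netflix's
# actual tool: terminate one random instance from a tagged group and verify
# the service keeps running. Tag values and region are placeholders.
import random
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:chaos-eligible", "Values": ["true"]},
             {"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
candidates = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if candidates:
    victim = random.choice(candidates)
    print("terminating", victim, "-- the service should not notice")
    ec2.terminate_instances(InstanceIds=[victim])
```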

But developers also have a right to expect transparency from vendors. That’s been a sticking point with data centers on occasion, but here again Amazon is leading the pack with increased communication to customers and reimbursements beyond those specified in the SLA.

Though 20/20 hindsight may be at play here, some critics have asked for even more pointers and updates when such events occur, including information like references to when and where network events have happened and whether they’ve been contained; specific lists of services or APIs affected; and volume IDs and performance statistics that would help customers evaluate their own responses and design choices for future improvements.

That transparency seems to be increasing as cloud computing becomes ever more viable, according to Rhoton. “If you went to cloud conferences a year ago, all you’d see is vendors talking to each other. There weren’t many customers, and of those who were doing cloud computing, many were considering it their secret information,” he said.

“Now we’re hearing more about specific implementations, but we’re still in the early adoption phase.”