Software development for the cloud often involves coding against Platform as a Service (PaaS) services provided in the cloud. These PaaS services often are provided in tandem with Software as a Service (SaaS) websites, with Salesforce’s Force.com being a well-known example. But how can you leverage these PaaS services without becoming tripped up by security and service management?

The idea of using Web-based APIs is not a new one. In the past, we would have thought of it as screen-scraping a website. This was the enabling technology behind early sites for comparing airline prices from multiple airline sites, or combining search results from multiple search engines.

The problem with screen-scraping is that website owners didn’t necessarily want their sites turning into an API. They didn’t want their data to be harvested, so they tried to stop it. However, early measures, such as limiting access by client IP address, were easily defeated by tools.

Another issue is that screen-scraping is brittle; a small change in the site’s look or feel could break the data access methods. That’s where the concept of the managed Web API was born.

Web APIs would allow developers to write code to access a website programmatically, using HTTP GETs and parameters within query strings, but in a managed manner that benefits both the client and the service provider. For the client, a standard interface enables applications to be written to a well-defined interface, safe in the knowledge that the API will not change unpredictably. For the provider, management of the API through rate limiting puts a virtual “circuit breaker” on the API usage, preventing overuse by a single client.
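To make this concrete, here is a minimal sketch of a client calling a hypothetical Web API with an HTTP GET, passing its parameters in the query string. The endpoint, parameter names, and key value are invented for illustration; Python is used only for brevity.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical fare-lookup API; parameters travel in the query string.
params = {"flight": "BOS-SFO", "date": "2010-06-01", "api_key": "my-api-key"}
url = "https://api.example.com/v1/fares?" + urlencode(params)

with urlopen(url) as response:          # a plain HTTP GET against the Web API
    print(response.read().decode("utf-8"))
```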

Web APIs are PaaS services that allow a developer to use the Web as a platform, creating an application from pieces of functionality sourced from the cloud. Service providers can monetize their services by putting a usage and pricing model into place.

The convention for managing Web APIs is to use an API key. Developers are given an API key (or in the case of Amazon, two keys), which are used for the identification and authentication of requests sent to the Web API. Sites providing APIs also provide snippets of code in various languages (such as PHP, Java or C#) that let developers use the keys.

This code handles the creation of a keyed-Hash Message Authentication Code (HMAC), which accompanies the request to the Web API. The HMAC serves two purposes: ensuring the integrity of the request to the Web API (ensuring the request has not been tampered with), and ensuring the authentication of the client sending the request. Authentication, therefore, is based on proof of possession of the API key.
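As a rough illustration of what such a snippet does, the following Python sketch signs a request with HMAC-SHA256 using a single API key. The key value, parameter names, and canonicalization scheme are assumptions for the example; each real provider defines its own.

```python
import hashlib
import hmac
from urllib.parse import urlencode

API_KEY = "my-secret-api-key"   # issued by the provider; value here is hypothetical

def sign_request(params: dict, api_key: str) -> dict:
    """Attach an HMAC-SHA256 signature computed over the canonicalized query string."""
    message = urlencode(sorted(params.items())).encode("utf-8")
    signature = hmac.new(api_key.encode("utf-8"), message, hashlib.sha256).hexdigest()
    return dict(params, signature=signature)

signed = sign_request({"action": "getQuote", "symbol": "ACME"}, API_KEY)
# The service recomputes the HMAC with its copy of the key; a match proves both
# integrity (the parameters were not altered in transit) and possession of the key.
```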

Amazon has a different model in that it provides two keys. Readers familiar with Public Key Infrastructure (PKI) may assume that these two keys are public and private keys that are linked together as an asymmetric key pair. However, they are not public and private keys in the sense of RSA or DSA algorithms.

One key, called the Access Key ID, is used as an identifier, identifying the party that is accessing the Amazon service. It is similar in concept to a username, and it may be sent in unencrypted requests. Indeed, when the Amazon S3 cloud service is used for online storage, the Access Key ID forms part of the URL and may be recorded by Web infrastructure between the client and Amazon. Its main purpose is for identification, not authentication.

In Amazon’s model, the second key, the Secret Access Key, is used for message authentication. It is used to create an HMAC, which provides proof of possession of the Secret Access Key. The Secret Access Key can be thought of as a shared secret between Amazon.com and the developer who is using Amazon resources. By using the Secret Access Key to create the HMAC, developers prove that they have access to the shared secret, and therefore have proof of possession.
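A simplified sketch of this two-key pattern is shown below. It is not Amazon’s exact signing algorithm (the string-to-sign, hash choice and parameter names here are illustrative); the point is that the Access Key ID travels in the clear to identify the caller, while the Secret Access Key is only ever used locally to produce the HMAC.

```python
import hashlib
import hmac
from urllib.parse import urlencode

ACCESS_KEY_ID = "AKIAEXAMPLE"          # identifier; may appear in the URL in the clear
SECRET_ACCESS_KEY = "secret-example"   # shared secret; never transmitted

def build_signed_url(base_url: str, params: dict) -> str:
    # The Access Key ID identifies the caller, much like a username.
    params = dict(params, AWSAccessKeyId=ACCESS_KEY_ID)
    string_to_sign = urlencode(sorted(params.items()))
    # Only the HMAC produced with the Secret Access Key is sent, proving
    # possession of the shared secret without revealing it.
    signature = hmac.new(SECRET_ACCESS_KEY.encode("utf-8"),
                         string_to_sign.encode("utf-8"),
                         hashlib.sha256).hexdigest()
    return f"{base_url}?{string_to_sign}&Signature={signature}"

url = build_signed_url("https://s3.amazonaws.com/my-bucket/report.csv",
                       {"Expires": "1275000000"})
```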

Because usage of the cloud services is billed to the developer, it is vital that the Secret Access Key does not fall into the wrong hands. Otherwise, a large bill may be run up. If a developer suspects that a Secret Access Key has been accessed by a third party, a new Secret Access Key can be generated online.

On the face of it, it seems easy to create a Web API out of an existing website. A developer may look at the single API Key model, or even Amazon’s more complex two-key model, and think, “I could do that.” This is potentially a recipe for disaster. Let’s examine why…

Unless developers have security experience, they should be wary of rolling their own API Key-based management system for their Web API. Consider replay attacks, for example. If a request has a valid HMAC and is properly formatted, it is let through.

But what if the exact same request is received a second time, from a different sender? Will it also be let through? This second request may have been recorded by a traffic sniffer running on a rogue wireless access point. Because API Key models do not use the back-and-forth handshaking of an authentication protocol such as SSL-based authentication, they are particularly vulnerable to replay attacks if not implemented carefully.

The key countermeasure is to include unique data in the HMAC that changes with each request, such as a timestamp or a nonce (a value used only once). A message that arrives with exactly the same timestamp or nonce as an earlier one then falls under suspicion as a replay attack and can be blocked, with the client notified that the request was rejected. In essence, blocking replay attacks puts extra work on the developer of the Web API.
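One way a Web API developer might check for replays is sketched below, assuming the client includes a timestamp and a nonce in the signed material. The parameter names and the five-minute freshness window are assumptions made for the example.

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"my-secret-api-key"   # the client's API key (hypothetical value)
ALLOWED_SKEW = 300                     # seconds a timestamp is considered fresh
seen_nonces = set()                    # in production, a shared store with expiry

def verify_request(message: bytes, timestamp: int, nonce: str, signature: str) -> bool:
    """Reject requests whose HMAC, timestamp, or nonce indicate tampering or replay."""
    expected = hmac.new(SHARED_SECRET,
                        message + str(timestamp).encode() + nonce.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                   # tampered with, or signed with the wrong key
    if abs(time.time() - timestamp) > ALLOWED_SKEW:
        return False                   # stale timestamp: possible replay
    if nonce in seen_nonces:
        return False                   # exact same request seen before: replay
    seen_nonces.add(nonce)
    return True
```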

Additionally, if a user loses an API Key, it is up to the service provider to issue a new key. Ideally this should be provided as an online service. However, this means yet another credential (such as a password) is needed to identify the client. Before a developer knows it, they are building a full identity management system.

Throttling is another aspect of Web API management that is easier said than done. It is performed in order to hold users to their usage plans. Throttling is particularly important for the freemium model, which calls for general usage to be free, but capped at a particular rate. When users pay for usage, they are allowed higher rates of access.

The freemium model succeeds only if an enforcement point is in place that can apply different rate plans, and that enforcement point must not itself be vulnerable to denial-of-service. Additionally, when usage limits must be tracked across a bank of servers, management and monitoring can quickly become complex.
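As a rough sketch of what such an enforcement point has to do, the following single-process, fixed-window throttle admits requests against per-plan limits. The plan names and limits are invented, and in a real deployment the counters would have to live in a shared store precisely because of the multi-server problem described above.

```python
import time
from collections import defaultdict

# Requests allowed per one-minute window; plan names and limits are hypothetical.
PLAN_LIMITS = {"free": 60, "premium": 600}
WINDOW = 60                                   # seconds

counters = defaultdict(lambda: (0, 0.0))      # api_key -> (count, window_start)

def allow_request(api_key: str, plan: str) -> bool:
    """Fixed-window throttle: admit a request only while the caller is under its plan limit."""
    count, window_start = counters[api_key]
    now = time.time()
    if now - window_start >= WINDOW:          # the window has expired; start a new one
        counters[api_key] = (1, now)
        return True
    if count < PLAN_LIMITS[plan]:
        counters[api_key] = (count + 1, window_start)
        return True
    return False                              # over the limit: reject (e.g. with HTTP 429)
```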

Developers may choose to implement their own API Key model, enforce throttling, block replay attacks, and issue new keys to users whose keys are compromised. If the service is being offered commercially, then service management must include the creation, modification and management of standard usage plans, each with different feature sets, limits and pricing rules.

By taking this approach, the developer may end up spending more time managing the delivery of the service than creating the service itself. Rather than reinventing service management, one solution is to leverage off-the-shelf technology to manage APIs. Technology for managing Web service usage has existed for many years, and has been proven in high-volume deployments that are constantly in the line of fire of production usage. This can be a much better option than building service management from scratch and potentially running into security issues.

API Keys are now established as the standard mechanism to manage Web APIs available in the cloud. However, management of these Web APIs is not as straightforward as it seems, and can quickly tie up a lot of development time. This time would be better spent developing the service itself. By leveraging existing service management infrastructure, a full-featured Web API service delivery platform can be more efficiently deployed.

Mark O’Neill is CTO of Vordel, which sells products to manage cloud computing.