In the modern economy, every business is a software business. Why? Because companies have figured out that the easiest way to introduce innovation into a marketplace is with software, whose defining characteristic is that, well, it’s soft. Changing a physical piece of equipment can take months; software, by contrast, can be distributed quickly and widely to smartphone users, programmable manufacturing equipment, and a wide variety of other destinations. This is why you can Google just about any company of any size and find it has openings for software engineers.

But if you’re spending all that money on developer salaries, how do you maximize the innovation you get from those developers? It turns out that iterations are the currency of software innovation.

It’s all about the at-bats
Venture capitalists are in the business of finding innovation, and most of them will tell you that for every 10 companies they invest in, they are happy if one hits it big. One of the things the public cloud did for the VC community was let them take more swings of the bat by funding more companies at lower capitalizations, because those startups could get going without having to purchase hardware. More at-bats, to continue the baseball analogy, resulted in more innovation.

Applying that same hit percentage to software development, a company has roughly a 10% chance that any given release contains an innovation that will stick with its intended audience. So is it better to have four chances at innovation a year with quarterly releases, 12 chances with monthly releases, or 52 chances with weekly releases? The strategic answer is obvious: more releases, and therefore more iterations of software, produce more chances at innovation. Tactically, though, how do you do that?

Maximizing iterations: From monoliths to microservices
In the early 1990s, when most software ran in data centers on physical hardware, iteration speed was trumped by risk mitigation. Back then, physical servers had to be treated like scarce resources: they were the only way to make a unit of compute available to run a software stack, and replacing that unit of compute took months. Components of a monolithic application most often communicated with each other within the same memory space, or over client/server connections using custom protocols. All the pieces were typically moved into production together to minimize risk, but the side effect was that if one component had issues, the entire application had to be backed out, which limited iteration speed even further.

But virtual machines can be created in minutes and containers in seconds, which changed the way developers thought about application components. Instead of relying on in-memory or custom-protocol communication, each component could expose an HTTP-based API, and that API could act as a contract between components. As long as the contract didn’t change, the components could be released independently of one another. Further, if every component sat behind its own load balancer, it could also be scaled independently and take advantage of rolling deployments, where old instances of a component are removed from behind the load balancer as new ones are injected.

These are the modern tenets of a microservices-based architecture. Thanks to those API contracts, microservices are more loosely coupled than their monolithic predecessors, and that looser coupling enables faster iterations.
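To make the contract idea concrete, here is a minimal sketch in Python using Flask and requests; the service names, URLs, and fields are hypothetical, not taken from any particular system. One component exposes a versioned HTTP API, and another consumes it without knowing anything about how the first is implemented or deployed.

```python
# inventory_service.py -- one microservice exposing an HTTP API "contract"
# (hypothetical service; requires: pip install flask)
from flask import Flask, jsonify

app = Flask(__name__)

# Fake in-memory data so the sketch is self-contained
STOCK = {"sku-123": 42, "sku-456": 0}

@app.route("/api/v1/inventory/<sku>")
def get_inventory(sku):
    # The URL shape and the JSON fields are the contract; as long as they
    # don't change, callers never have to be released in lockstep.
    return jsonify({"sku": sku, "available": STOCK.get(sku, 0)})

if __name__ == "__main__":
    app.run(port=5001)
```

```python
# ordering_service.py -- a separate component that only knows the contract
# (requires: pip install requests)
import requests

INVENTORY_URL = "http://localhost:5001"  # in production, a load balancer's address

def can_fulfill(sku, quantity):
    resp = requests.get(f"{INVENTORY_URL}/api/v1/inventory/{sku}", timeout=2)
    resp.raise_for_status()
    return resp.json()["available"] >= quantity

if __name__ == "__main__":
    print(can_fulfill("sku-123", 5))
```

Because the ordering component depends only on the URL and the JSON fields, the inventory team can redeploy or scale their component as often as they like without coordinating a joint release.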

Kubernetes is a big deal, and so is serverless
But now, if you have hundreds or thousands of containers to manage for all these microservices, you need a way to distribute them across different physical or virtual hosts, handle naming and scheduling, and optimize networking, since components that land on the same host don’t need their packets to go out to the network card at all. This is why Kubernetes is such a big deal, and why Google (through GKE), AWS (through EKS), and Cisco (through CCP), among others, are so bought into the container clustering platform. And again, it’s all in the name of iterations, so that development teams can more loosely couple their components and release them faster as a way of finding innovation.
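As a rough illustration of what that buys you, here is a sketch using the official Kubernetes Python client (pip install kubernetes); the image, names, and replica count are hypothetical, and it assumes a cluster reachable via your local kubeconfig. You declare how many copies of a component you want, Kubernetes schedules them across hosts, and changing the image tag triggers a rolling update.

```python
# deploy.py -- declare a component to Kubernetes and let it handle
# scheduling, naming, and rolling updates (hypothetical names and image)
from kubernetes import client, config

config.load_kube_config()   # use local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="inventory"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes spreads these across nodes for you
        selector=client.V1LabelSelector(match_labels={"app": "inventory"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inventory"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="inventory",
                    image="example/inventory:1.2.0",
                    ports=[client.V1ContainerPort(container_port=5001)],
                )
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Releasing a new iteration is just a new image tag; Kubernetes performs
# the rolling update, swapping old pods for new ones behind the service.
deployment.spec.template.spec.containers[0].image = "example/inventory:1.3.0"
apps.patch_namespaced_deployment(name="inventory", namespace="default", body=deployment)
```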

But what’s next? The big deal about serverless architectures is that they could be the next step in this evolution. Instead of coupling components via API contracts, serverless functions are tied together through event gateways. Instead of multiple instances of a component sitting behind a load balancer, functions sit on disk until an event triggers them into action. This requires a far more stateless approach to building the logic inside each function, but it is an even looser coupling than microservices, with potentially better underlying physical server utilization, since the functions are at rest on disk until needed.
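Here is a sketch of what that looks like, using the AWS Lambda handler convention as one example; the function name and event fields are hypothetical. The code holds no state of its own, does nothing until the event gateway invokes it, and consumes no compute in between.

```python
# record_order.py -- a stateless, event-triggered function
# (AWS Lambda-style handler; event fields here are hypothetical)
import json

def lambda_handler(event, context):
    # Everything the function needs arrives in the event; nothing is kept
    # in memory between invocations, so any instance can handle any event
    # and the platform only runs the code when an event actually fires.
    order = json.loads(event.get("body", "{}"))
    sku = order.get("sku")
    quantity = order.get("quantity", 0)

    # In a real system this would write to a datastore or emit another
    # event; printing keeps the sketch self-contained.
    print(f"received order for {quantity} x {sku}")

    return {
        "statusCode": 200,
        "body": json.dumps({"accepted": True, "sku": sku}),
    }
```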

The bottom line
The bottom line is that the best way to find a good idea is to iterate through ideas quickly and discard the bad ones once you’ve tried them. This concept is driving application architecture, container clustering platforms, and serverless approaches in an attempt to remove as much friction as possible from the software development and release processes. The potential innovation gains from maximizing iterations are what just about every company is chasing these days, and it’s all because iterations are the currency of software innovation.