Clayton Christensen called it “disruptive innovation.” Lester Thurow called it “punctuated equilibrium.” Thomas Kuhn called it a “paradigm shift.” Whatever you call it, you know it when you see it: Worlds turned upside down by fundamentally new innovations that render yesterday’s truths suddenly false.
These events tend to favor new entrants at the expense of incumbents who are often found clinging in vain to the precepts of faltering beliefs. Others get on board by dint of courage, wisdom or—perhaps most often—the pragmatic view that martyrdom and willful ignorance rarely end well.
This pattern is playing out today in the world of enterprise IT, where a bookseller has turned traditional notions of IT utterly on their ear. (Oh, one more truth about these disruptions: They tend to sneak up on you from unexpected quarters!)
Public cloud services such as Amazon EC2 have fundamentally changed the performance and economic expectations of modern IT organizations. IT used to be your cable company: Slow, bureaucratic, often inefficient and not terribly service-minded. That’s because business lines had no choice but to wait.
At the same time, IT operating budgets that were once expected to grow are doing just the opposite: IT is now expected to do more with less.
Along comes Amazon and it’s “16 digits to freedom,” according to Rodrigo Flores, CTO of newScale. It’s the swipe of the credit card and the release valve for pent-up demand that allow applications and workloads to escape to the cloud.
It’s hard to overstate what this all means for enterprise IT. The question echoes through the hallways of most any data center today: “Amazon can do it. Why can’t you?”
It’s the essence of this question that’s forcing a transformation in IT delivery models. This transformation will approximate four progressive phases:
1. Virtualization. The hypervisor is notable as a non-disruptive disruption because it has delivered disruptive value to IT organizations without turning their worlds upside down.
It’s a simple and seductive proposition: Consolidate hardware by improving the utilization of server capacity by 4X. Then create server pools that enable these virtualized resources to be shared, and you have something that looks like a fungible utility. Very nice!
There is, of course, a dirty little secret to virtualization: It’s a CapEx boom and an OpEx bust. As capacity is made available, it’s quickly filled in by pent-up demand. Like nature, IT abhors a vacuum.
2. Automation. A clean desk gets cluttered again. When storage space is made available, it’s quickly filled up. Virtualization is an OpEx bust because new capacity and newly met demand mean more software systems to be constructed, deployed and maintained. Today, this is handled through a combination of elbow grease, an eccentric assortment of scripts, duct tape, baling wire, and perhaps a little static electricity.
Of course, this sort of Rube Goldberg contraption is insufficient for modern IT. IT must industrialize processes with deep and declarative approaches to automation. Building, testing, deploying, scaling and changing systems should be fully automated and dynamically executed, predictably and consistently. Anything short of this will lead to gaps in performance and economy that will render the IT option the choice not taken as workloads happily flutter off to the cloud.
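What “deep and declarative” automation means can be sketched in a few lines: instead of scripting imperative steps, the operator declares a desired end state, and a reconciler computes only the actions needed to get there. The sketch below is a minimal illustration, not any vendor’s product; the `DesiredState` and `reconcile` names are hypothetical.

```python
# Minimal sketch of declarative automation: declare the desired end state,
# let a reconciler compute the difference from current reality.
# All names (DesiredState, reconcile) are illustrative, not a real API.

from dataclasses import dataclass

@dataclass(frozen=True)
class DesiredState:
    service: str
    version: str
    replicas: int

def reconcile(desired: DesiredState, current: dict) -> list:
    """Return the actions needed to move `current` to `desired`."""
    actions = []
    if current.get("version") != desired.version:
        actions.append("deploy %s:%s" % (desired.service, desired.version))
    diff = desired.replicas - current.get("replicas", 0)
    if diff > 0:
        actions.append("start %d instance(s)" % diff)
    elif diff < 0:
        actions.append("stop %d instance(s)" % -diff)
    return actions

desired = DesiredState(service="billing", version="2.1", replicas=4)
print(reconcile(desired, {"version": "1.9", "replicas": 2}))
# Once converged, re-running the same declaration is a no-op:
print(reconcile(desired, {"version": "2.1", "replicas": 4}))
```

The point of the no-op second run is idempotence: the same declaration can be executed repeatedly, predictably and consistently, which is what separates industrialized automation from duct tape and scripts.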
3. Standardization. Henry Ford famously declared that the Model T could come in any color you want, so long as it was black. Not the most consumer-friendly sentiment, but he understood that standardization was at the very heart of scale economics.
The same principle applies to enterprise IT, where hardware and software infrastructure must be consolidated and standardized. In many cases, full standardization will be the wrong answer, creating the unintended side effect of putting a straitjacket on business lines and development groups who value diversity to enable their application innovations.
(What do you mean you’ll only deploy a Java stack?)
That’s why the right approach to standardization looks more like Michael Dell than Henry Ford: Mass customization. At the heart of mass customization: Deep automation, control and reuse of standardized components across the software supply chain.
4. Optimization. The final phase of this transformation is a transformation in the role of IT itself. Today, IT organizations are often vertically integrated bureaucracies that have lost sight of what is core to the business. According to analysts, some 80% of IT investment has no direct impact on business value.
That’s why, in the future, IT will have to “rise from the muck,” getting out from under the weight of layers of infrastructure and process complexity to focus on what really matters to business: the application! Amazon is an application-centric operating model. That’s exactly what enterprise IT must become.
At the same time, IT must throw out the vertically integrated playbook and become more like a sourcing integrator—not unlike Dell. In this model, IT will shift its focus from operations to optimization, managing and dynamically allocating workloads across a portfolio of internal and external, physical, virtual, and cloud-based IT resources.
To achieve this goal, applications must be decoupled from infrastructure as self-contained systems that can be deployed and retargeted on demand between internal and external compute environments. Workloads will move like liquid. As this occurs, IT can shift the focus from infrastructure to applications.
Perhaps we’ve lost sight of the fact that infrastructure only exists to serve the needs of applications, which is the only thing that moves the needle on the business side. So, if you ever find yourself looking for clarity in the face of change—the answer to where modern IT is headed—look no further than a little bookseller in Seattle.
And take a page from that book.
Jake Sorofman is chief marketing officer of rPath, which sells system automation software.