Can we really be one big happy family?

There is a reason “Ops” is part of DevOps. While the operations side seems to have been ignored lately, some very recent and public disruptions to Day 1 application launches have forced it to the forefront of our IT discussions. And it’s about time.

After all, there is no sense in developing applications better, faster, and cheaper if they land on brittle, incorrectly sized, or unavailable infrastructure when thrown over the wall to Operations. That is like an airplane entering its descent to the tarmac only to find that all the gates are occupied, under construction, or too small.

A colleague was recently reminiscing about his data center days of racking and stacking servers, configuring routers and switches, and providing Development with the beefiest server in stock to host his company’s applications—without really knowing how much capacity the application needed to meet its SLA or customer-experience requirements.

Every day was a fire drill. He would hear things like:
• “We need more memory for the database server.”
• “We want to break the Web server up into two load-balanced hosts.”
• “Do you have any extra servers lying around that we could build a test lab with?” (That was his favorite.)

He had to say “yes” to everything because these applications were going into production, and he didn’t want to be the bottleneck that held up a production release. But that’s exactly what can happen when Operations has to guess at the amount of CPU, memory, network, and storage capacity these applications need. If infrastructure environments aren’t sized appropriately for peaks in workload, bottlenecks will occur that negatively impact the customer experience.

The common solution in the absence of information is to over-provision, but that can cost millions of dollars.

So how can organizations reduce costs and risks, ensure service delivery, and provide an exceptional customer experience while continually right-sizing their application environments? They can turn to proactive capacity planning, the process of understanding infrastructure utilization to assess capacity efficiency and model growth across the entire data center. In this way, IT organizations can anticipate potential issues that could impact the customer experience and a company’s bottom line.
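
To make that concrete, here is a minimal sketch, in Python with invented numbers, of the trend side of capacity planning: fit a line to historical peak utilization and estimate when a host runs out of safe headroom. The sample data and the 80% ceiling are assumptions for illustration, not output from any particular tool.

```python
# A minimal sketch of trend-based capacity planning: fit a line to
# historical peak utilization and estimate when a host exhausts its
# safe headroom. Data and the 80% ceiling are illustrative assumptions.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Monthly peak CPU utilization (%) for one database host (hypothetical).
months = list(range(12))
peak_cpu = [52, 54, 55, 58, 60, 61, 64, 66, 67, 70, 72, 75]

slope, intercept = linear_fit(months, peak_cpu)
CEILING = 80.0  # keep 20% headroom in reserve for workload peaks

if slope > 0:
    runs_out = (CEILING - intercept) / slope
    print(f"Growing {slope:.1f} points/month; "
          f"headroom gone around month {runs_out:.0f}")
else:
    print("Utilization is flat or falling; no action needed.")
```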

Think about a trading application for a brokerage firm, where a one-second delay can cost millions of dollars for the business as well as its clients. Or a GPS application that receives real-time updates from law enforcement agencies tracking a missing child or a stolen vehicle. Infrastructure that is not sized appropriately to handle large workloads will cause major application response issues. And there is much more at stake here than just lost revenue: brand image, as well as the health and welfare of the public, are all application-dependent.

Proactive capacity planning should not be considered a process—especially an “annual process”—that is adopted only for production environments. Development should model their own code, observe the impact on the expected production infrastructure, and make the necessary adjustments. They can improve code quality by identifying CPU or memory spikes due to code-related issues, then redesign and run the models again to ensure the fix is in place. This can drastically reduce development cycle times and the hardware and software costs of test environments.
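
As a rough illustration of that spike-hunting step, the sketch below flags load-test samples that tower over the rest of the run so the offending code path can be inspected. The median-based threshold and the sample data are invented for illustration, not the output of any specific profiler.

```python
# A rough sketch of spike detection on pre-production metrics: flag any
# sample that exceeds a multiple of the run's median. The 2x threshold
# and the data are invented; real metrics would come from monitoring.

from statistics import median

def find_spikes(samples, factor=2.0):
    """Return indices of samples exceeding the series median by `factor`."""
    threshold = factor * median(samples)
    return [i for i, s in enumerate(samples) if s > threshold]

# CPU utilization (%) sampled during a load-test run (hypothetical).
cpu = [35, 36, 34, 37, 35, 36, 35, 88, 37, 36, 35, 34, 91, 36]

for i in find_spikes(cpu):
    print(f"sample {i}: {cpu[i]}% CPU; inspect the code path exercised here")
```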

And finally, the exact amount of infrastructure capacity can be reserved for production releases, using the insight gained from modeling capacity for “what if” scenarios throughout the entire development life cycle. Our own Development organization runs one of the largest global SAP environments, processing hundreds of thousands of orders every day. They cannot afford an infrastructure bottleneck that could cost millions in lost revenue, so they model capacity needs for as many “what if” scenarios as possible and have a plan in place in case a “what if” turns into a “what now!”
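
For a flavor of what “what if” modeling looks like in miniature, the hypothetical sketch below scales a baseline order workload under a few scenarios and converts each into the CPU cores that would have to be reserved. Every number in it is an assumption for illustration.

```python
# A toy "what if" model: scale a baseline order workload under a few
# hypothetical scenarios and convert each into the CPU cores that would
# have to be reserved. All figures here are invented assumptions.

BASELINE_ORDERS_PER_HOUR = 20_000
CPU_SECONDS_PER_ORDER = 0.05   # measured cost of one order (assumed)
CORE_BUDGET = 3_600 * 0.80     # usable CPU-seconds per core-hour at an 80% ceiling

scenarios = {
    "normal day": 1.0,
    "quarter-end close": 1.8,
    "flash promotion": 3.0,
}

for name, multiplier in scenarios.items():
    demand = BASELINE_ORDERS_PER_HOUR * multiplier * CPU_SECONDS_PER_ORDER
    cores = demand / CORE_BUDGET
    print(f"{name}: reserve {cores:.1f} cores")
```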

That’s clearly important, because any company that can more easily roll out a new business service, update a production release, or improve a checkout experience is better able to respond to the ebb and flow of competition and capture more sales and customers. And an Operations staff that can cost-effectively deliver the right amount of infrastructure for those improvements will be seen as a strategic asset by the business, not as a bottleneck or cost center.

So, Ops teams: one way to inject yourself into your company’s DevOps discussion is to share your service- (or customer-) oriented capacity planning tool with your Development team and get them to start modeling and reserving the capacity their apps will truly need once they go into production. In that way, Ops and Dev can truly be one big happy family.

Cameron van Orman is VP of Product Strategy & Marketing of the Service Assurance & Infrastructure Management Business Unit at CA Technologies.