Many articles on DevOps tooling attempt to distill an ocean of products into a more drinkable process puddle: build automation, continuous integration, configuration management, and infrastructure deployment. And it makes sense, since any process that facilitates faster deployments and shorter lead times can deliver the business benefits outlined in surveys like the State of DevOps report.
However, this fixation on speeds and feeds misses a crucial component of DevOps – the fundamental need for tight feedback within and across the software pipeline, ensuring that what comes out the other end delivers a high-quality customer experience.
All of this is well understood, but with new tools and methods promising more application abstraction from the underlying nuts and bolts, it’s easy to jump on board the Continuous Delivery bullet train. Containers, for example, break the tight coupling between app and infrastructure, enabling teams to work independently, while serverless architectures (such as AWS Lambda) allow code execution without installing software, provisioning machines, deploying containers, or worrying about low-level detail.
But the devil is in the detail.
However, as great as these technologies are, high-performing IT organizations understand implicitly that you can’t abstract away the one essential element every team member owns – delivering a high-quality customer experience. It’s why reports like Coleman Parkes’ Accelerating Velocity and Customer Value with Agile and DevOps also illustrate that while DevOps gunslingers are quick on the draw, they also nail their quality targets.
It’s the end of IT Operations – as we know it
So if teams can abstract away traditional bottlenecks and choke points, shouldn’t the velocity benefits outweigh the disenfranchisement of an operations function? If new tools and empowering technologies can act as a circuit breaker to old-style standardization and change management dictates, why shouldn’t they be openly adopted?
We can argue until the proverbial cows come home on this, but two things remain irrefutable. First, maintainability, supportability, and resilience (all the goodness that constitutes a quality customer experience) aren’t a given with any new technology. Second, IT operations itself must transform from a ‘break-fix’ function into a practice that leverages insights to craft process improvements across the software pipeline.
But gaining insights is a tough gig. Distributed applications are inherently complex, and the ephemeral nature of systems means traditional rule-based alerting systems fall short. Worse still, many tools are used in isolation behind the production curtain to address each specific pain point. This just doesn’t scale and ends up creating more ‘keyhole’ views into disjointed information.
Not drowning, waving: Surviving the IT Ops Big Data deluge
Analytics is now touted as an answer to the Big Data challenge and the cognitive overload facing IT operations. By incorporating feedback across the software pipeline based on algorithmic insights (not best guesses or knee-jerk reactions), teams can action improvements immediately and in the context of business goals. For example:
- Predictive analytical models could be used to determine the best application designs needed to delight more customers – before one line of code is written or any cloud contract signed.
- Proven statistical methods could be invoked to determine which coding practices correlate to the greatest business value in terms of customer engagement and conversions.
- Deep, real-time, fact-based insights into application performance could identify third-party web services that negatively impact the customer experience so they can be eliminated.
- Using control theory and machine learning, operations teams could right-size cloud instances and better predict the optimum scaling of resources in real-time to meet customer demand.
- Accurate anomaly detection with no false-positive alarms can surface hidden complex problems, reduce staff burnout, and prevent error-prone and unnecessary failovers (see the minimal sketch after this list).
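To make the anomaly-detection idea concrete, here is a minimal, hypothetical sketch of the statistical approach described above – not any vendor’s implementation. It flags metric values that drift more than a chosen number of standard deviations from a rolling baseline; the window size, threshold, and sample data are all illustrative assumptions.

```python
from collections import deque
import math
import random


def detect_anomalies(samples, window=60, threshold=3.5):
    """Flag (timestamp, value) points deviating from a rolling mean
    by more than `threshold` standard deviations.

    `window` and `threshold` are illustrative defaults, not tuned values.
    """
    recent = deque(maxlen=window)
    anomalies = []
    for ts, value in samples:
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((x - mean) ** 2 for x in recent) / window
            std = math.sqrt(var)
            # Alert only once a stable baseline exists and the deviation is
            # large, which helps suppress false-positive alarms on noisy metrics.
            if std > 0 and abs(value - mean) / std > threshold:
                anomalies.append((ts, value))
        recent.append(value)
    return anomalies


if __name__ == "__main__":
    # Hypothetical usage: simulated response-time samples, one per second,
    # with a single injected latency spike.
    stream = [(t, random.gauss(200, 10)) for t in range(600)]
    stream[300] = (300, 900)
    print(detect_anomalies(stream))
```

In practice, production-grade systems replace the fixed threshold with learned baselines per metric and season, which is where the machine-learning techniques mentioned above come in.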
What’s exciting about all these use cases isn’t the math per se, it’s how insights have enabled operations to shift gears: from a back-office, keep-the-lights-on function to a discipline focused on driving improvement across the software pipeline. Rather than being engaged in religious vetting processes to stem the inevitable tide of technology goodness, IT operations has become an essential cog in the experimental wheel that business must keep spinning to evolve.
In a business context, operational insights gained from analytics are wonderful things. They allow us to gain control over the things we should care about – customer experience, sure, but also cloud cost models and SLAs, together with improving profit margins from every digital interaction. As so aptly put by McKinsey & Company, it’s about making data work for you, rather than the other way around – giving purpose to data and translating it into action.
Analytics – More than hype, but also non-trivial
As with any new technology, it’s easy to get carried away with the spin and forget to read the small print. Trying to use proprietary, monolithic methods to ingest, normalize, and correlate gazillions of time-series data points from an eclectic mix of services constantly spewing out machine logs, alarms, and counters could become nothing more than a costly science project. On the other hand, point solutions lack the contextual secret sauce and fail to deliver sustainable value since they assume users actually know what they’re looking for.
Gaining actionable insights across composable architectures requires analytics applications that are – well, composable. Modern approaches deliver interchangeable components that can scale to address thorny dimensionality and missing-data issues. These solutions inject both proven and new analytics and correlation techniques with the necessary relevance and context, along with the operational knowledge gained in managing everything from big iron to containers and software-defined networks.
Distilling a complex toolchain into core DevOps processes that help meet businesses’ insatiable demand for software delivered at speed is only one side of the equation. On the other side is operational analytics – use it purposefully and every digital initiative will be better equipped to solve the most complex transformational problems.