Historically, setting up a monitoring system by hand didn’t present a problem because neither the application code nor the application infrastructure (middleware, app servers, etc.) changed very often. IT would provision a box, set its IP address, load some software, configure the monitoring, and then not touch it again for years. 

Nor was an application environment built on traditional technologies likely to fluctuate rapidly in scale: the number of application instances and host servers running at any given moment rarely changed. As a result, manually configuring the tech stack in these static environments didn’t impair the organization’s ability to monitor and manage performance. 

But when workloads move into dynamic environments based on technologies such as containers and on methodologies such as CI/CD, manual performance management strategies and build-it-yourself solutions break down. This is true for several reasons.

Constant change
Change is the only constant in dynamic application environments. The structure of the application changes continuously at every layer. New hosts come online and disappear all the time, and containers make provisioning even more dynamic. Developers build and deploy entirely new APIs and services continuously, without checking with operations. Even the application code can change at any moment due to improvements or bug fixes. The closer a team gets to CI/CD, the more frequently changes occur. 

As a result, performance management configuration, monitoring dashboards, dependency mappings and alerting rules must be able to evolve automatically to keep pace with the environment they are monitoring. Otherwise, IT teams lose accurate visibility into the environments they manage, leaving the organization at serious risk of failures that impact end users.
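
To make that concrete, here is a minimal sketch in Python of monitoring configuration regenerated from discovery rather than maintained by hand. The discover_services() helper, the service fields and the tier thresholds are all hypothetical stand-ins for whatever a real orchestrator API or service registry would expose; the point is simply that alert rules are derived from the live environment on each pass.

```python
# Minimal sketch: regenerate alerting rules from live discovery instead of
# maintaining them by hand. discover_services() is a hypothetical stand-in
# for a real source such as a container orchestrator or service registry.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    tier: str        # e.g. "frontend", "backend", "database"
    instances: int

def discover_services() -> list[Service]:
    # Placeholder: in practice this would query the orchestrator's API.
    return [
        Service("checkout-api", "backend", 4),
        Service("web", "frontend", 8),
    ]

# Default latency thresholds per tier; new services inherit one automatically.
LATENCY_SLO_MS = {"frontend": 300, "backend": 500, "database": 100}

def build_alert_rules() -> list[dict]:
    rules = []
    for svc in discover_services():
        rules.append({
            "alert": f"{svc.name}-latency",
            "condition": f"p99_latency_ms > {LATENCY_SLO_MS[svc.tier]}",
            "min_healthy_instances": max(1, svc.instances // 2),
        })
    return rules

if __name__ == "__main__":
    # Re-run on every discovery cycle: rules always reflect what runs right now.
    for rule in build_alert_rules():
        print(rule)
```

Run on every discovery cycle, a generator like this never falls behind the environment it monitors, because there is no hand-maintained rule list to fall behind.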

Complex dependencies
These constant changes impact the actual dependencies between different components. Any specific service depends on a unique vertical stack of software as well as data (or processing) from other services. 

Why does knowing dependencies matter? Troubleshooting. Getting to the root cause of an issue in a complex environment is an exercise in dependency analysis. What’s causing slow or erroneous requests? A request traverses many services across many frameworks and infrastructures, so understanding the structural dependencies of every request is invaluable in answering that question. And as noted above, those dependencies constantly change!
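
To illustrate what automated dependency analysis looks like, the sketch below derives a service dependency graph from distributed trace spans and walks it toward a likely culprit. The span schema and the sample data are invented for the example; real tracing systems carry equivalent parent/child and timing information.

```python
# Sketch: derive a service dependency graph from trace spans and find the
# slowest downstream hop of a request. The span schema is invented for
# illustration; real tracers carry equivalent parent/child information.

from collections import defaultdict

# Each span: (span_id, parent_id, service, duration_ms)
spans = [
    ("a", None, "web",          420),
    ("b", "a",  "checkout-api", 390),
    ("c", "b",  "payments",     350),
    ("d", "b",  "inventory",     20),
]

def dependency_graph(spans):
    """Map each service to the set of services it calls, per the traces."""
    by_id = {sid: svc for sid, _, svc, _ in spans}
    calls = defaultdict(set)
    for sid, parent, svc, _ in spans:
        if parent is not None and by_id[parent] != svc:
            calls[by_id[parent]].add(svc)
    return dict(calls)

def slowest_path_end(spans):
    """Heuristic root-cause hint: follow the slowest child at each hop."""
    children = defaultdict(list)
    for sid, parent, svc, dur in spans:
        children[parent].append((sid, svc, dur))
    sid, svc, dur = max(children[None], key=lambda s: s[2])
    while children[sid]:
        nxt = max(children[sid], key=lambda s: s[2])
        if nxt[2] < dur * 0.5:   # children are cheap: current span is the culprit
            break
        sid, svc, dur = nxt
    return svc, dur

print(dependency_graph(spans))   # {'web': {'checkout-api'}, 'checkout-api': {'payments', 'inventory'}}
print(slowest_path_end(spans))   # ('payments', 350)
```

Because the graph is rebuilt from whatever traces arrive, it stays current through deployments and scaling events with no human redrawing the map.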

Attempting to interpret such dependencies manually is simply not feasible, especially when dependencies change quickly due to code deployments or infrastructure scaling. Even if human operators succeed in mapping dependencies at one moment in time, their map will quickly become outdated. What’s more, manual dependency interpretation is a huge resource drain that ties up your best engineers.

Deteriorating rules
When your environment changes constantly, the rules that your monitoring tools use to determine whether applications and services are healthy need to change continuously as well. If they don’t (which is likely to happen if you depend on manual intervention to update rules), the rules will quickly deteriorate. For example, when a new service is deployed or an orchestrator moves workloads, health check rules will cease to interpret environment dependencies accurately until the rules are updated. 

When rule updates depend on manual effort, they inevitably lag, and monitoring alerts and insights end up based on outdated configurations. This deterioration undercuts visibility and increases the risk of an infrastructure or service failure.
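
A small sketch of the difference, assuming hypothetical desired_replicas() and running_replicas() helpers that stand in for orchestrator queries: the static rule hard-codes the topology as it looked when the rule was written, while the dynamic rule re-reads desired state at evaluation time.

```python
# Sketch of why static health rules deteriorate. Both helpers are
# hypothetical stand-ins for queries against an orchestrator's API.

def desired_replicas(service: str) -> int:
    # e.g. read the deployment's desired replica count
    return {"checkout-api": 6}.get(service, 1)

def running_replicas(service: str) -> int:
    # e.g. count ready containers for the service; 4 of 6 are up
    return 4

# Brittle: encodes the topology as it looked when the rule was written.
def static_health(service: str) -> bool:
    EXPECTED = 3            # stale the moment the service is scaled to 6
    return running_replicas(service) >= EXPECTED

# Robust: re-derives the expectation from desired state at evaluation time.
def dynamic_health(service: str) -> bool:
    return running_replicas(service) >= desired_replicas(service)

print(static_health("checkout-api"))   # True  -- yet two replicas are missing
print(dynamic_health("checkout-api"))  # False -- correctly flags degradation
```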

Manual monitoring impedes speed
Tasks such as writing tracing and monitoring code are too time-consuming, as are interpreting monitoring information by hand and manually updating performance management rules whenever application code or environment architecture changes. 

Simply put, humans can’t support rapidly changing environments without automation. Attempting to manage performance manually will significantly slow down application release cycles. And for the business, it means poor use of the expensive expertise that IT staff represent.

It may sound like a cliché to observe that we live in a fast-changing world, but the faster a company can deliver new custom business applications, the more value IT delivers. Automation is the only way to make that happen.

 

Content provided by Instana and SD Times