But are there really any differences between APM and AIOps? If so, where are they? Some believe these are simply marketing terms vendors use to differentiate their tools; others see AIOps as an evolution of monitoring. Either way, the performance of your application can be critical to your organization's bottom line, whether that is measured financially, in membership growth, or in the number of users on your site.
The key reason monitoring has changed is that software and IT architectures are far more distributed than they once were. Monolithic applications are being rewritten as microservices and deployed in the cloud. Managing them requires automation, and many companies are also adding machine learning to assist in decision-making. Those two properties, automation and machine learning, define AIOps, though traditional APM vendors are adding automation to their solution sets as well.
Stephen Elliot, program director of I&O at research firm IDC, said thinking back 10 years ago, application performance was very much about the application itself, specific to the data tied to that app, and it was a silo. “I think now that one of the big differences is not only do you have to have that data, but it’s a much broader set of data — logs, metrics, traces — that is collected in near real-time or real-time with streaming architectures,” Elliot said.
“Customers now have broadened out what they expect in terms of observability versus the traditional APM view,” he continued. “And they’re increasingly expecting more integration points to collect different pieces of data, and some level of analytics that can drive root cause, pattern matching, behavioral analysis, predictive capabilities, to then sort of filter up, ‘Here’s where the problem might be, and maybe, here’s what you should do to fix it.’ They need a lot broader set of data that they trust, and they need to see it in their own context, whether it’s a DevOps engineer, a site reliability engineer, cloud ops, platform engineers.”
AIOps looks not only at the application itself but also at the infrastructure: how the cloud is performing, how the network is performing. The intelligence comes in when the system can be trained to reconfigure itself to accommodate changing loads, provision storage as needed for data, and the like.
But, before declaring the holy grail of monitoring has been found, Gartner research director Charley Rich cautioned, “Just be aware … APM is a very mature market. In terms of our hype cycle, it’s way past the bump in hype and moving into maturity. AIOps, on the other hand, is just climbing up the mountain of hype. Very, very different. What that means in plain English is that what’s said about AIOps today is just not quite true. You have to look at it from the perspective of maturity.”
What is not quite true about AIOps?
“Oh, that it just automatically solves problems,” said Rich, who is the lead author on Gartner’s APM Magic Quadrant, as well as the lead author on the analysis firm’s AIOps market guide. “A number of vendors talk about self-healing. There are zero self-healing solutions on the market. None of them do it. You know, you and I go out and have a cocktail while the computer’s doing all the work. It sounds good; it’s certainly aspirational and it’s what everyone wants, but today, the solutions that run things to fix are all deterministic. Somewhere there’s a script with a bunch of if-then rules and there’s a hard-coded script that says, ‘if this happens, do that.’ Well, we’ve had that capability for 30 years. They’re just dressing it up and taking it to town.”
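Rich's point about deterministic remediation can be made concrete. Below is a minimal sketch, not any vendor's implementation, of the kind of hard-coded if-then runbook he describes; all event names and actions are hypothetical:

```python
# A sketch of deterministic "self-healing": a hard-coded rule table,
# not learned behavior. Event names and actions are invented for illustration.

REMEDIATION_RULES = {
    "disk_usage_high": "purge_temp_files",
    "service_unresponsive": "restart_service",
    "queue_backlog": "scale_out_workers",
}

def remediate(event: str) -> str:
    """Look up the scripted action for an event; escalate if no rule matches."""
    action = REMEDIATION_RULES.get(event)
    if action is None:
        return "page_on_call_engineer"   # no rule: a human takes over
    return action

print(remediate("disk_usage_high"))      # -> purge_temp_files
print(remediate("unknown_spike"))        # -> page_on_call_engineer
```

As Rich notes, nothing here adapts or learns; the system only ever does what someone scripted in advance.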
But Rich emphasized he didn’t want to be dismissive of the efforts around AIOps. “It’s very exciting, and I think we’re going to get there. It’s just, we’re early, and right now, today, AIOps has been used very effectively for event correlation — better than traditional methods, and it’s been very good for outlier and anomaly detection. We’re starting to see in ITSM tools more use of natural language processing and chatbots and virtual support assistants. That’s an area that doesn’t get talked about a lot. Putting natural language processing in front of workflows is a way of democratizing them and making complex things much more easily accessible to less-skilled IT workers, which improves productivity.”
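The outlier and anomaly detection Rich credits to AIOps often begins with simple statistics. A minimal illustrative sketch (the metric values and threshold are invented) that flags latency samples far from the mean:

```python
import statistics

def find_anomalies(samples, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # no variation, nothing can be an outlier
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Response times in ms; the 900 ms spike stands out against steady ~100 ms.
latencies = [101, 99, 103, 98, 100, 102, 900, 97, 101, 100]
print(find_anomalies(latencies))  # -> [900]
```

Production systems use far more sophisticated models, but the underlying idea, learning what "normal" looks like and surfacing deviations, is the same.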
Indeed, many organizations today are gaining greater event detection, correlation and remediation through the use of AI and machine learning in their monitoring. But to achieve that, organizations have to rethink the tools they use and the way they monitor their systems.
Is AIOps a better way to do things? Machine learning makes monitoring tools more agile, and through self-learning algorithms they can self-adjust, but that doesn’t necessarily make them AIOps solutions, Rich said.
“Everybody’s doing this,” he pointed out. “We in the last market guide segmented the market of solutions into domain-centric and domain-agnostic AIOps solutions. So domain-centric might be an APM solution that’s got a lot of machine learning in it but it’s all focused on the domain, like APM, not on some other thing. Domain-agnostic is more general-purpose, bringing in data from other tools. Usually a domain-agnostic tool doesn’t collect, like a monitoring tool does. It relies on collectors from monitoring tools. And then, at least in concept, it can look across different data streams, different tools, and come up with a cross-domain analysis. That’s the difference there.”
Pundits tell us that implementing many new technologies requires a change in culture, as if that were as simple as changing a pair of socks. Often, when they talk about culture change, they really mean learning new skills and reorganizing teams, not changing the way people work.
In the case of monitoring, Joe Butson, co-founder of consulting company Big Deal Digital, sees automation in AIOps enabling a shift from the finger-pointing often associated with incidents to a healthier acceptance that problems are going to happen.
“One of the things about the culture change that’s underway is one where you move away from blaming people when things go down to, we are going to have problems, let’s not look for root cause analysis as to why something went down, but what are the inputs? The safety culture is very different. We tended to root cause it down to ‘you didn’t do this,’ and someone gets reprimanded and fired, but that didn’t prove to be as helpful, and we’re moving to a generative culture, where we know there will be problems and we look to the future.”
AIOps is driving this organizational culture change by adding automation to IT systems, which allows companies to respond proactively rather than reacting to incidents as they occur, Butson explained.
“It’s critical to success because you can anticipate a problem and fix it. In a perfect world, you don’t even have to intervene, you just have to monitor the intervention and new resources are being added as needed,” he said. “You can have the monitoring automated, so you can auto-scale and auto-descale. You’d study traffic based on having all this data, and you are able to automate it. Once in a while, something will break and you’ll get a call in the middle of the night. But for the most part, by automating it, being able to take something down, or roll back if you’re trying to bring out some new features to the market and it doesn’t work, being able to roll back to the last best configuration, is all automated.”
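The auto-scaling and automated rollback Butson describes can be sketched in a few lines. This is a hedged illustration, not a real deployment system; the capacities, bounds, and config shapes are all invented:

```python
import math

def desired_replicas(requests_per_sec: float, capacity_per_replica: float = 500,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Scale out and back in with observed traffic, within fixed bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

class Deployer:
    """Remember the last configuration that passed health checks,
    so a bad release can be rolled back automatically."""
    def __init__(self, initial_config: dict):
        self.current = initial_config
        self.last_good = initial_config

    def deploy(self, new_config: dict, healthy: bool) -> dict:
        self.current = new_config
        if healthy:
            self.last_good = new_config     # promote to known-good
        else:
            self.current = self.last_good   # automated rollback
        return self.current

print(desired_replicas(3200))  # -> 7 replicas for 3,200 req/s at 500 req/s each
```

The human's job shifts to what Butson describes: monitoring the automation itself and taking the occasional middle-of-the-night call when something falls outside the rules.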
Further, Butson said, machine learning empowers organizations to remove the human component “to a very large case.” Humans will still be reviewing the assumptions made behind the automations, but, he noted, “Every month, every year, machine learning is taking out more of the guesswork because of the data.”
Similarly, application monitoring runs on a certain set of assumptions as to how an application should behave, how the network should perform, and other metrics the organization deems as critical. So you have those assumptions, but then, Butson asked, how do you deal with the anomalies? “You prepare for the anomalies,” he said, “and that’s a different kind of culture for all of us.”
The human element
Gartner’s Rich said what people want is for the algorithms to adapt to what they’re doing and analyze the current situation. This, he said, is a legitimate want, but no one really delivers it yet. “It’s very hard because you don’t get all the signals that say, ‘this is the problem.’ You have to infer that from a lot of data, and then look at the past and look at topology, and come up with, ‘this is the best solution we can recommend based on what you’ve used,’ then run the solution, rate itself and then improve it the next time. That cycle of continuous improvement is just not there.”
Further, he said, when you think about it, would you even want a machine to do that? Risk is the key factor determining how much automation organizations will enable. “If it’s a password change, someone says they want to update their password, sure, the machine can do that. If the solution is ‘boot the server,’ like we do with Windows, or start up a new container, there’s no risk. But if the solution is ‘reconfigure the commerce server,’ and the downside might be we can’t book any orders today, would you want the machine doing that? No.”
IDC’s Elliot said it’s a matter of trust. Teams have to trust that the algorithm is going to do what it says it’s going to do correctly. “You can see the aspirations, and some tools emerging that are driving automated decision-making. For example, resizing a reserve instance on AWS, or shutting a reserve instance down, or potentially maybe moving storage automatically. There are different tasks that can be done automated that can be trusted and can be executed via policy. We are seeing that, and expect more of it down the road as customers get comfortable with replacing some of these manual tasks with automated, event-driven decision-making.”
Moving to AIOps won’t be quick; it is often a multi-year process, and every company will move at its own pace and scale. But automation is here and will only get better. “Even the public cloud providers really see automation as a way to differentiate their own platforms. And that’s pretty critical when customers hear, ‘You can put certain types of workloads on our platform, and because you’re using our platform, we are embedding automated capabilities onto those workloads.’”