Time (and tide) wait for no man, the expression goes. But then there’s latency.
Back in the early days of the Internet, before mobile phones and newer application architectures, people for the most part had some patience. If it took a few seconds for a video to load, we waited. If we went to a website and got a 404 error, that was OK; we’d try again later. Not today. It’s an instant-gratification world: ‘I want it now, and if you can’t give it to me now, I’ll get it from someone else, or just forget about it altogether.’
And the biggest enemy of this new way of retrieving information is latency.
But latency is not just the time it takes for an application to fulfill your request. For businesses today, it’s also the time it takes for a customer to reach your help desk, the time it takes to find out whether an order can be delivered by a particular date, or the time it takes to get information from a predictive analytics system quickly enough to prevent a machine from failing.
I was talking recently with Monte Zweben, the CEO of application modernization data platform provider Splice Machine, about this very subject. “These [latencies] all manifest themselves into incredible dollars. It might be outages, it might be customer experiences that are negative, tainting their brands. It all comes down to the fact that they can’t act quick enough.”
And one big source of latency for organizations is their legacy applications. As companies look to modernize these applications and the systems they run on, they have three options, according to Zweben: throw away the custom applications that differentiate them from their competitors; use a common, SaaS-like application, in which everybody gets the same thing; or customize the common app, which comes with its own problems.
There is a fourth option, which most companies undertake: Rewrite the legacy app. But, to quote the movie Animal House, “that could take years, and cost thousands of lives.”
“You see companies wholeheartedly rewriting entire elements of their portfolios that drive critical business processes, whether that’s customer service, whether that’s predictive maintenance, whether that’s inventory optimization, whether that’s fraud detection in financial circles,” Zweben said. “The reason why this is fundamentally about latency is that when they rewrote, they used an architecture that had all the components of trying to build smart, digital processes that were data-driven and using artificial intelligence or machine learning, and these processes required lots of different compute engines to be duct-taped together. We talk about it in the context of a technical solution, and the technical solution is one of integrating the computational workloads that these companies need to run their applications that are purpose-built, to analyze those applications and derive insights, and to inject their artificial intelligence into them. The thing that we missed was we always talked about it in terms of the technical solution, but we didn’t talk about it in the context of the customer’s pain. The customer’s pain is that when they used these modern new computational engines and duct-taped them together, yes, it was complex, yes, it required lots of engineers, yes, it was more expensive, but that wasn’t the point. And the point is that when you have a disintegrated system that is sort of moving large volumes of data around in order to accomplish this digital transformation, the companies experience latency.”
The systems that run business applications are OLTP (online transaction processing) systems, and the systems that analyze data are OLAP (online analytical processing) systems. Moving data from one to the other creates latency. Zweben explained that machine learning can be used to solve that problem.
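Before getting to the machine learning angle, here is a minimal sketch, in Python, of the pattern Zweben is describing. The schema, the Order record, and the nightly schedule are hypothetical stand-ins; the point it illustrates is that the analytical side only sees a transaction after the next scheduled load, and that gap is the latency.

```python
# Minimal sketch (hypothetical names and schedule) of the "duct-taped" pattern:
# transactional writes land in an OLTP store, a periodic ETL job copies them
# into an OLAP store, and only then can analytics act on them. The gap between
# write_time and the next load is the latency in question.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Order:
    order_id: int
    amount: float
    write_time: datetime          # when the OLTP system recorded it


@dataclass
class OltpStore:
    rows: List[Order] = field(default_factory=list)

    def insert(self, order: Order) -> None:
        self.rows.append(order)   # application writes are immediate


@dataclass
class OlapStore:
    rows: List[Order] = field(default_factory=list)
    last_load: datetime = datetime.min

    def bulk_load(self, batch: List[Order], now: datetime) -> None:
        self.rows.extend(batch)   # analytics only sees data after this load
        self.last_load = now


def nightly_etl(oltp: OltpStore, olap: OlapStore, now: datetime) -> None:
    """Copy everything written since the last load; runs on a schedule, not on demand."""
    fresh = [o for o in oltp.rows if o.write_time > olap.last_load]
    olap.bulk_load(fresh, now)


# An order written at 09:00 is invisible to fraud scoring or inventory analytics
# until the next scheduled load, here 24 hours later.
oltp, olap = OltpStore(), OlapStore()
oltp.insert(Order(1, 99.0, datetime(2020, 1, 1, 9, 0)))
nightly_etl(oltp, olap, now=datetime(2020, 1, 2, 9, 0))
staleness = olap.last_load - oltp.rows[0].write_time
print(f"analytics saw this order {staleness} after it happened")  # 1 day, 0:00:00
```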
“An example of that is thinking about bad actors, thinking about people committing fraud, or money-laundering, or cyberthreat,” he said. “They’re constantly changing their techniques. You can’t think of IT and applications as, ‘Build your application, go through testing, and deploy it.’ Now, with a smart application, you have to actually be continuously training new models. And, as part of that data science process, it’s not just about retraining, but you literally have to change the models. And typically, in machine learning, a data scientist is trying to think about how do they transform the raw data coming from claims, coming from policies, coming from the demographics on customers, maybe even social feeds from those customers, and any exogenous data they can get, and turn that raw data into features that can predict something. And that process is still an art that data scientists do. They end up on an everyday basis coming up with new features and running new experiments, trying new algorithms, tweaking the parameters of these algorithms, and they’re constantly doing these experiments, and it becomes a mess. It’s difficult for them to control this.”
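What that daily churn of features and experiments looks like in practice might be something like the sketch below. It is purely illustrative: the claims fields, the engineered features, and the fraud label are invented, and the experiment grid is simply whatever a data scientist might try on a given day.

```python
# A minimal sketch of the experiment loop Zweben describes: turn raw records
# (here, made-up insurance claims) into candidate features, then try several
# algorithms and parameter settings to see which predicts fraud best.
# Column names, features, and the fraud label are hypothetical illustrations.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
claims = pd.DataFrame({
    "claim_amount": rng.gamma(2.0, 500.0, 1000),
    "policy_age_days": rng.integers(30, 3650, 1000),
    "prior_claims": rng.poisson(0.5, 1000),
    "is_fraud": rng.integers(0, 2, 1000),          # stand-in label
})

# Feature engineering: the "art" of turning raw fields into predictive signals.
features = pd.DataFrame({
    "log_amount": np.log1p(claims["claim_amount"]),
    "claims_per_year": claims["prior_claims"] / (claims["policy_age_days"] / 365.0),
    "new_policy": (claims["policy_age_days"] < 90).astype(int),
})
label = claims["is_fraud"]

# Experimentation: new algorithms and parameter tweaks, run over and over.
experiments = {
    "logreg_C=1": LogisticRegression(C=1.0, max_iter=1000),
    "logreg_C=0.1": LogisticRegression(C=0.1, max_iter=1000),
    "forest_100": RandomForestClassifier(n_estimators=100, random_state=0),
    "forest_300_deep": RandomForestClassifier(n_estimators=300, max_depth=10, random_state=0),
}
for name, model in experiments.items():
    score = cross_val_score(model, features, label, cv=5, scoring="roc_auc").mean()
    print(f"{name:>16}: AUC {score:.3f}")   # tracking these by hand is where the mess begins
```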
Splice Machine has built a new machine learning workbench that empowers data scientists to keep up with bad actors and changing markets by continuously deploying new machine learning models into mission-critical applications. This, Zweben explained, means organizations can derive business value more accurately and make better data-driven decisions, “reducing the latency of when you inject a model into an app from the next time you update it.”
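The article doesn’t detail the workbench’s internals, so the following is only a generic, hypothetical sketch of the idea behind reducing that train-to-inject latency: the running application scores against whatever version a small registry marks as current, so a retraining job can promote a new model without redeploying the app. Every name in it is made up and is not Splice Machine’s API.

```python
# Generic sketch (hypothetical names, not Splice Machine's API): the app reads
# the "current" model from a registry on every request, and a retraining job
# publishes new versions into that registry. Promoting a model takes effect on
# the very next request, with no application redeploy.

import threading
from typing import Callable, Dict

Model = Callable[[dict], float]   # a fitted model reduced to "features in, score out"


class ModelRegistry:
    """Holds versioned models; the app reads the current one, a training job publishes new ones."""

    def __init__(self) -> None:
        self._versions: Dict[str, Model] = {}
        self._current = ""
        self._lock = threading.Lock()

    def publish(self, version: str, model: Model) -> None:
        with self._lock:                 # promotion is atomic for concurrent scorers
            self._versions[version] = model
            self._current = version

    def current(self) -> Model:
        with self._lock:
            return self._versions[self._current]


registry = ModelRegistry()
registry.publish("v1", lambda features: 0.10)     # initial model, trained offline


def score_transaction(features: dict) -> float:
    """Application code path: always scores with whatever model is current right now."""
    return registry.current()(features)


print(score_transaction({"amount": 250}))          # 0.10, served by v1

# A retraining job, triggered by new data or a new experiment, publishes v2;
# the next request is scored by it immediately.
registry.publish("v2", lambda features: 0.35)
print(score_transaction({"amount": 250}))          # 0.35, served by v2
```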
That’s another kind of latency, and it is new: intelligent apps didn’t exist 20 years ago. Today, data scientists are driving everything by running experiments on data that’s constantly changing. According to Zweben, “What’s different now is the fact that you’re operating on data from multiple sources, in vast volumes, and running experiments in order to converge on better models that need to be injected into the applications.”
As they hardly ever say in customer service, “Your estimated wait time is … now reduced.”