During the COVID-19 crisis, people got a much better sense of the challenges facing enterprise computing when workers — used to operating inside the company’s firewall — were suddenly telecommuting en masse, working on remote teams and learning new applications. Test and development teams have been painfully aware of this kind of transformation for decades and were, for the most part, ready to respond.

They may remember a time when using enterprise apps meant one person at a time logging into a dumb terminal linked to a mainframe computer. Developers once wrote static code for static systems, where the system itself was a protective shell around the data. Today, though, there are few boundaries between an enterprise app based in someone’s cloud and a sea of other components generated by this and other apps in other clouds, all connected through users on phones and browsers across the supply and demand chain. So the old question pops up: How do you do proper testing and compliance when someone cuts a billion-dollar deal via Facebook chat?

Companies are transitioning to reusable code: strings of components that can be triggered automatically across applications or the IoT to simplify and secure transactions for end users, or to handle other machine-triggered events, retaining data for compliance and mining it for such things as security and process improvement. This automation, of course, does not simplify the transactions themselves, only their use, by way of software tools that hide the complexity. Additional processes are still necessary; they are just less-human processes. As a result, we often don’t know exactly what is triggering what inside modern networks, or how it is actually affecting the business, until errors show up or tests flag an issue. We may discover unwanted users designing ingenious schemes to illegally seize control of assets they will hold for ransom — hopefully before they fully engage.

All this puts a premium on continuous testing scenarios that apply artificial intelligence and machine learning to determine whether alerts are the result of coding errors, business anomalies, security breaches or processing errors, and then learn to spot similar events. The red flag is not necessarily the “aha” moment itself; it is just the beginning of the detective work that routes the issue toward the right skills for a swift resolution.
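To make the triage idea concrete, here is a minimal sketch of how learned routing might look. Everything here is illustrative: the categories, the sample failure messages and the token-scoring approach are assumptions for the sake of the example, not a description of any real product’s implementation.

```python
from collections import Counter, defaultdict

# Hypothetical labeled history of test-failure messages.
# Labels and examples are invented for illustration only.
HISTORY = [
    ("NullPointerException in OrderService.apply", "coding_error"),
    ("assertion failed: expected 200 got 500", "coding_error"),
    ("order total negative for customer 4411", "business_anomaly"),
    ("discount exceeds configured maximum", "business_anomaly"),
    ("login attempts from unrecognized host", "security_breach"),
    ("token signature mismatch on API gateway", "security_breach"),
    ("batch job timed out after 3 retries", "processing_error"),
    ("queue consumer lag above threshold", "processing_error"),
]

def tokenize(text):
    """Split a message into lowercase tokens, trimming stray punctuation."""
    return [t.lower().strip(".,:") for t in text.split()]

def train(history):
    """Count how often each token appears under each category label."""
    counts = defaultdict(Counter)
    for message, label in history:
        counts[label].update(tokenize(message))
    return counts

def triage(message, counts):
    """Route a new alert to the category whose vocabulary it best matches."""
    tokens = tokenize(message)

    def score(label):
        total = sum(counts[label].values())
        # Add-one smoothing so an unseen token doesn't zero out a category.
        return sum((counts[label][t] + 1) / (total + 1) for t in tokens)

    return max(counts, key=score)

model = train(HISTORY)
print(triage("assertion failed: expected 404 got 500", model))
```

A real system would use richer features and feedback from human triagers so that each resolved alert sharpens the next routing decision, but the shape is the same: learn from labeled history, score new alerts, and hand each one to the right specialist.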

Some offer AI and ML as the solution: put artificial detectives to work on the most difficult system triggers. Parasoft sees this differently. We want our clients to put their best minds on the toughest problems. This is where creative problem-solving is needed, and where highly skilled individuals with solid domain expertise shine most brightly, where they are challenged and motivated. This is why you develop talent.

If we can save clients time and effort, it should be on the more mundane issues typically handled by your lesser sleuths — the more marginal performers who are going to be de-motivated by the routine drudgery of chasing “the usual suspects,” the roughly 80 percent of test failures that can easily be attributed to a coding or process error. Keeping the test bed clean keeps test maintenance costs down to earth and reduces job creep, while growing the system’s ability to learn and evolve. This is where AI and ML play best, showing solid returns. They are not THE solution, but they give you a handle on the problem by focusing your people on the deeper business questions where they can make a real difference.

This also makes the human hours far more productive, freeing your best talent to innovate: not just test code, but business processes, with an eye toward ongoing performance. Testing is not a coding spell check; it is about seeing the multiple big pictures of development, security, process and performance — the deeper impacts of issues that are harder to define.

As with supply chain management, you lower the river’s flow to take a closer look at the underlying rocks that impede it, then put your best people to work removing them. The result is higher performance and greater efficiency. It’s much the same with continuous testing. It all comes back to performance, where the rubber of process meets the road of markets that determine profitability. At the end of the day, it’s about your business far more than your technology.

Learn more at www.parasoft.com.


Content provided by SD Times and Parasoft.