In the 1990s, when software started to become ubiquitous in the business world, quality was still a big issue. It was common for new software and upgrades to be buggy and unreliable, and rollouts were difficult. Software testing was mostly a manual process, and the people developing the software typically also tested it. Seeing a need in the market, consultancies started offering outsourced software testing. While it was still primarily manual, it was more thorough. Eventually, automated testing companies emerged, performing high-volume, accurate feature and load testing. Soon after, automated software monitoring tools emerged to help ensure software quality in production. Eventually, automated testing and monitoring became the standard, and software quality soared, which of course helped accelerate software adoption.

AI model development is at a similar inflection point. AI and Machine Learning technologies are being adopted at a rapid pace, but quality varies. Often, the data scientists developing the models are also the ones manually testing them, and that can lead to blind spots. Testing is manual and slow. Monitoring is nascent and ad hoc. And AI model quality is suffering, becoming a gating factor for the successful adoption of AI. In fact, Gartner estimates that 85 percent of AI projects fail.

The stakes are getting higher. While AI was first primarily used for low-stakes decisions such as movie recommendations and delivery ETAs, AI is now increasingly the basis for models that can have a big impact on people’s lives and on businesses. Consider credit scoring models that can affect a person’s ability to get a mortgage, or the Zillow home-buying model debacle that led to the closure of the company’s multi-billion dollar line of business buying and flipping homes. Many organizations learned too late that Covid broke their models: changing market conditions left models with outdated variables that no longer made sense (for instance, basing credit decisions for a travel-related credit card on travel volume at a time when all non-essential travel had halted).

Not to mention, regulators are watching.

Enterprises must do a better job with AI model testing if they want to gain stakeholder buy-in and achieve a return on their AI investments. And history tells us that automated testing and monitoring are how we get there.

Emulating testing approaches in software development

First, let’s recognize that testing traditional software and testing AI models require significantly different processes, because AI bugs are different. AI bugs are complex statistical and data anomalies, not functional defects, and the black-box nature of models makes them hard to identify and debug. As a result, AI development tools and methodologies are immature and not prepared for dealing with high-stakes use cases.

AI model development differs from software development in three important ways:

  • It involves iterative training/experimentation vs being task and completion oriented;
  • It’s predictive vs functional; and 
  • Models are created via black-box automation vs human designed.

Machine learning also presents unique technical challenges that aren’t present in traditional software – chiefly:

  • Opaqueness/Black box nature
  • Bias and fairness
  • Overfitting and unsoundness
  • Model reliability
  • Drift
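
Of these, drift is among the most amenable to automated monitoring. As a rough illustration, a drift check can compare a feature’s production distribution against its training distribution, for example with a population stability index. The function below is a minimal sketch; the binning scheme and the 0.2 threshold are common rules of thumb, not standards.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's live distribution against its training
    distribution; larger values indicate more drift.
    (Illustrative sketch -- binning and thresholds are assumptions.)"""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid division by zero and log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live = rng.normal(0.5, 1.0, 10_000)      # shifted distribution in production
psi = population_stability_index(training, live)
# A common rule of thumb treats PSI > 0.2 as significant drift
print(f"PSI = {psi:.3f}", "drift detected" if psi > 0.2 else "stable")
```

A monitoring job could run a check like this on every scoring batch and alert when the index crosses the threshold, turning drift from a post-mortem discovery into a routine signal.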

The training data that AI and ML model development depend on can also be problematic. In the software world, you could purchase generic software testing data, and it could work across different types of applications. In the AI world, training data sets need to be specifically formulated for the industry and model type in order to work. Even synthetic data, while safer and easier to work with for testing, has to be tailored for a purpose. 
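
For instance, tailoring synthetic test data can be as simple as sampling within domain-realistic ranges rather than using generic random values. The toy sketch below fabricates credit-application records; every field name and range here is hypothetical, chosen only to show the idea.

```python
import numpy as np

# Generic random values won't exercise a credit model's real input space,
# so sample within domain-specific ranges. All fields/ranges are made up.
rng = np.random.default_rng(42)
n = 1_000
synthetic_applicants = {
    "credit_score": rng.integers(300, 851, n),         # FICO-style range
    "annual_income": rng.lognormal(11.0, 0.5, n).round(2),
    "debt_to_income": rng.uniform(0.0, 0.6, n).round(3),
}
# Sanity-check that the tailored data stays inside realistic bounds
assert synthetic_applicants["credit_score"].min() >= 300
assert synthetic_applicants["credit_score"].max() <= 850
```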

Taking proactive steps to ensure AI model quality

So what should companies leveraging AI models do now? Take proactive steps to work automated testing and monitoring into the AI model lifecycle. 

A solid AI model quality strategy will encompass four categories:

  • Real-world model performance, including conceptual soundness, stability/monitoring and reliability, and segment and global performance
  • Societal factors, including fairness and transparency, and security and privacy
  • Operational factors, such as explainability, collaboration, and documentation
  • Data quality, including missing and bad data

All are crucial to ensuring AI model quality.
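
To make this concrete, the four categories could be wired into an automated quality gate that runs before each deployment. The sketch below uses plain NumPy; every metric, threshold, and field name is an illustrative assumption, not a recommended standard.

```python
import numpy as np

def quality_gate(y_true, y_pred, group, features):
    """Illustrative pre-deployment checks spanning the four categories.
    All thresholds here are assumptions, not industry standards."""
    report = {}

    # Real-world performance: overall and per-segment accuracy
    acc = np.mean(y_true == y_pred)
    report["global_accuracy"] = bool(acc >= 0.80)
    seg_accs = [np.mean(y_true[group == g] == y_pred[group == g])
                for g in np.unique(group)]
    report["segment_accuracy"] = bool(min(seg_accs) >= 0.70)

    # Societal factors: demographic parity gap between groups
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    report["fairness_parity"] = bool(max(rates) - min(rates) <= 0.10)

    # Data quality: share of missing feature values
    report["data_completeness"] = bool(np.mean(np.isnan(features)) <= 0.05)

    return report

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = y_true.copy()
flip = rng.random(1000) < 0.1              # simulate 10% prediction errors
y_pred[flip] = 1 - y_pred[flip]
group = rng.integers(0, 2, 1000)           # a hypothetical protected attribute
features = rng.normal(size=(1000, 5))
features[rng.random((1000, 5)) < 0.01] = np.nan  # ~1% missing values

print(quality_gate(y_true, y_pred, group, features))
```

A gate like this fails the build when any check returns False, the same way a unit-test suite blocks a software release.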

For AI models to become ubiquitous in the business world – as software eventually did – the industry has to dedicate time and resources to quality assurance. We are nowhere near the five nines of quality that’s expected for software, but automated testing and monitoring is putting us on the path to get there.