A few weeks ago, the National Institute of Standards and Technology (NIST) released Dioptra, an open source tool for testing the trustworthiness of AI models. 

Dioptra offers a common platform for assessing models throughout their life cycle, from initial development through acquisition by other parties who then need to verify that the models are trustworthy.

“Our systems increasingly rely on Machine Learning (ML) algorithms and models to perform essential functions,” NIST wrote in a post. “As users of these systems, we must implicitly trust that the models are working as designed. Establishing the trustworthiness of an ML model is especially hard, because the inner workings are essentially opaque to an outside observer.”

NIST defines several characteristics that a trustworthy AI model must have: “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair – with harmful bias managed.”

Dioptra offers several key features, including reproducible and traceable tests, an extensible plugin system with interoperability among plugins, user authentication, an intuitive web interface, and support for multi-tenant deployments in which users can share and reuse components.
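
To give a sense of what such a test harness automates, here is a minimal sketch of a robustness check of the kind Dioptra is meant to orchestrate and record. It is purely illustrative and does not use Dioptra's actual API; the toy model, data, and perturbation budget are all assumptions.

```python
# Illustrative only: a minimal robustness check of the kind a platform
# like Dioptra is designed to orchestrate. This does NOT use Dioptra's
# API; the model, data, and perturbation budget are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed linear classifier over 2-D inputs.
weights = np.array([1.5, -2.0])
bias = 0.25

def predict(x: np.ndarray) -> np.ndarray:
    """Return class labels (0 or 1) for a batch of inputs."""
    return (x @ weights + bias > 0).astype(int)

# Toy evaluation set, labeled by the clean model itself.
x_test = rng.normal(size=(200, 2))
y_test = predict(x_test)

# Measure how often predictions hold up under small input perturbations,
# a crude stand-in for the adversarial-robustness metrics a real test
# harness would compute and track across runs.
epsilon = 0.3
x_perturbed = x_test + rng.uniform(-epsilon, epsilon, size=x_test.shape)
robust_accuracy = (predict(x_perturbed) == y_test).mean()

print(f"accuracy under ±{epsilon} input perturbation: {robust_accuracy:.2%}")
```

In a platform like Dioptra, a check along these lines would be packaged as a plugin so that the same test can be rerun, traced, and compared across models and experiments.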

Dioptra was designed to support NIST’s AI Risk Management Framework, released in January 2023 to help manage the risks AI poses to individuals, organizations, and society as a whole. Specifically, it supports the framework’s “Measure” function by providing tooling for assessing, analyzing, and tracking AI risk.

“Initiatives like Dioptra are vital in ensuring AI technologies are developed and used ethically, reinforcing the commitment to safeguarding AI systems while promoting innovation,” said Michael Rinehart, vice president of artificial intelligence at Securiti, a company that also provides AI safeguards. 

