Recent advancements in artificial intelligence are helping organizations move faster and make more intelligent decisions. However, while organizations know AI can be good for business, they don’t always know exactly how it works.

Eighty-two percent of enterprises are interested in using AI, but 60 percent worry about liability issues and 63 percent don't believe they have the proper in-house talent to manage it, IBM research shows.

IBM is releasing a new software service that will enable organizations not only to trust their AI systems to make decisions, but also to see how those systems reached their decisions and why. According to the company, the service will automatically detect bias, provide greater understanding, simplify management, and make AI more transparent.

“IBM led the industry in establishing Trust and Transparency principles for the development of new AI technologies,” said Beth Smith, general manager of Watson AI at IBM. “It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

The IBM Services will run on the IBM Cloud and work with models built in popular machine learning frameworks and environments such as Watson, TensorFlow, SparkML, AWS SageMaker and AzureML.

In addition, the solution can be customized to an organization’s specific use, detect bias at runtime, capture potential unfair outcomes, and recommend mitigations for any detected bias.
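The runtime bias detection described above can be illustrated with a minimal, hypothetical monitor — this is a sketch of the general technique, not IBM's actual service. It tracks the positive-outcome rate per group as predictions stream in and flags a violation when the gap between groups (the statistical parity difference) exceeds a threshold; the class name and 0.1 threshold are illustrative assumptions:

```python
from collections import defaultdict

class BiasMonitor:
    """Hypothetical runtime monitor: tracks positive-outcome rates per group
    and flags when the statistical parity difference exceeds a threshold."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        # group -> [favorable outcomes, total outcomes]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, group, favorable):
        # Log one model decision for the given group
        pos, total = self.counts[group]
        self.counts[group] = [pos + (1 if favorable else 0), total + 1]

    def parity_difference(self, group_a, group_b):
        # Difference in favorable-outcome rates between two groups
        pa, ta = self.counts[group_a]
        pb, tb = self.counts[group_b]
        return pa / ta - pb / tb

    def is_biased(self, group_a, group_b):
        # Flag when the gap exceeds the configured threshold
        return abs(self.parity_difference(group_a, group_b)) > self.threshold
```

A production service would also capture the flagged decisions and suggest mitigations (for example, reweighting training data), but the core check is this running comparison of outcome rates.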

As part of the IBM Services announcement, the company is also releasing an AI bias detection and mitigation toolkit to the open-source community in order to provide greater education and collaboration around AI. The AI Fairness 360 toolkit will feature a variety of algorithms, code and tutorials for detecting bias in machine learning models.
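To make concrete the kind of metric such a toolkit computes, here is a self-contained sketch of disparate impact — the ratio of favorable-outcome rates between an unprivileged and a privileged group, where the common "four-fifths rule" treats values below 0.8 as a signal of adverse impact. The toy data and function below are illustrative assumptions, not AI Fairness 360's actual API:

```python
def disparate_impact(labels, groups, favorable=1, unprivileged=0, privileged=1):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.

    A value of 1.0 means parity; values below 0.8 are commonly
    treated as evidence of adverse impact.
    """
    def rate(group):
        outcomes = [l for l, g in zip(labels, groups) if g == group]
        return sum(1 for l in outcomes if l == favorable) / len(outcomes)
    return rate(unprivileged) / rate(privileged)

# Toy hiring data: label 1 = hired; group 1 = privileged, 0 = unprivileged
labels = [1, 0, 0, 0, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(f"disparate impact = {disparate_impact(labels, groups):.2f}")
```

Here the unprivileged group's hire rate is 0.25 against the privileged group's 0.75, giving a ratio of about 0.33 — well under 0.8, so a toolkit of this kind would flag the model for review.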

IBM will also be providing consulting services to help companies build, manage and deploy AI safely, as well as minimize the risk of biased decision making from those systems.

“As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust. IBM Research’s comprehensive strategy addresses multiple dimensions of trust to enable AI solutions that inspire confidence,” IBM wrote on its website.