Google is making it easier for companies to build generative AI responsibly by adding new tools and libraries to its Responsible Generative AI Toolkit.

The Toolkit provides tools for responsible application design, safety alignment, model evaluation, and safeguards, which together support the safe and responsible development of generative AI applications.

Google is adding the ability to watermark and detect text generated by an AI product using Google DeepMind’s SynthID technology. The watermarks aren’t visible to humans viewing the content, but they can be picked up by detection models to determine whether content was generated by a particular AI tool.

“Being able to identify AI-generated content is critical to promoting trust in information. While not a silver bullet for addressing problems such as misinformation or misattribution, SynthID is a suite of promising technical solutions to this pressing AI safety issue,” SynthID’s website states. 
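SynthID’s actual algorithm (which works at the level of the model’s token-sampling process) is proprietary and not reproduced here. As a generic illustration of the underlying idea, the toy sketch below shows how a keyed statistical watermark can be invisible to readers yet detectable with the key: a secret key deterministically splits the vocabulary into a “green list” for each context, a watermarking sampler would favor green tokens during generation, and a detector simply measures how often tokens land in the green list. All names and the scheme itself are illustrative, not SynthID’s.

```python
import hashlib
import random


def greenlist(prev_token: str, key: str, vocab: list[str]) -> set[str]:
    """Derive a keyed 'green list' (half the vocabulary) from the previous
    token. A watermarking sampler would bias generation toward these tokens."""
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, k=len(vocab) // 2))


def green_fraction(tokens: list[str], key: str, vocab: list[str]) -> float:
    """Score a text: the fraction of tokens falling in the keyed green list.
    Watermarked text scores well above the ~0.5 expected by chance, while
    unwatermarked text (or a wrong key) stays near 0.5."""
    hits = sum(
        tokens[i + 1] in greenlist(tokens[i], key, vocab)
        for i in range(len(tokens) - 1)
    )
    return hits / max(len(tokens) - 1, 1)
```

Because the split is derived from a secret key, only a detector holding that key can compute the score, which is why such watermarks support attribution without altering how the text reads.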

The next addition to the Toolkit is the Model Alignment library, which lets developers use an LLM to refine their prompts based on specified criteria and feedback.

“Provide feedback about how you want your model’s outputs to change as a holistic critique or a set of guidelines. Use Gemini or your preferred LLM to transform your feedback into a prompt that aligns your model’s behavior with your application’s needs and content policies,” Ryan Mullins, research engineer and RAI Toolkit tech lead at Google, wrote in a blog post.
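The feedback-to-prompt workflow Mullins describes can be sketched as a simple meta-prompting loop. The snippet below is a minimal illustration of that pattern, not the Model Alignment library’s actual API; `call_llm` is a hypothetical stand-in for a Gemini (or any other LLM) client, and the meta-prompt wording is invented for this example.

```python
# Hypothetical sketch of the feedback-to-prompt pattern: fold guidelines
# or holistic feedback into a rewritten prompt via a meta-prompted LLM.

META_PROMPT = (
    "You are a prompt engineer. Rewrite the prompt below so the model's "
    "outputs satisfy the given guidelines.\n\n"
    "Guidelines:\n{guidelines}\n\n"
    "Original prompt:\n{prompt}\n\n"
    "Return only the rewritten prompt."
)


def align_prompt(prompt: str, guidelines: list[str], call_llm) -> str:
    """Use an LLM to transform feedback/guidelines into a refined prompt.

    call_llm: any callable taking a prompt string and returning a string,
    e.g. a thin wrapper around a Gemini or other LLM client.
    """
    bullet_list = "\n".join(f"- {g}" for g in guidelines)
    meta = META_PROMPT.format(guidelines=bullet_list, prompt=prompt)
    return call_llm(meta)
```

Keeping the LLM client behind a plain callable mirrors the Toolkit’s stated goal of working with any model, whether Gemma, Gemini, or something else.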

Finally, Google is improving the developer experience in the Learning Interpretability Tool (LIT) on Google Cloud, a tool that provides insights into “how user, model, and system content influence generation behavior.”

It now includes a model server container, allowing developers to deploy Hugging Face or Keras LLMs on Google Cloud Run GPUs with support for generation, tokenization, and salience scoring. Users can also now connect to self-hosted models or Gemini models using the Vertex API. 

“Building AI responsibly is crucial. That’s why we created the Responsible GenAI Toolkit, providing resources to design, build, and evaluate open AI models. And we’re not stopping there! We’re now expanding the toolkit with new features designed to work with any LLMs, whether it’s Gemma, Gemini, or any other model. This set of tools and features empower everyone to build AI responsibly, regardless of the model they choose,” Mullins wrote.