Meta’s AI team has announced the release of a new project called Purple Llama that will include tools and evaluations to assist in responsibly building AI models.
The components within Purple Llama will be released with a permissive license to enable collaboration by the community and to help standardize the development of trust and safety tools.
Its initial release will focus on tools that help address the risks outlined in President Biden’s Executive Order on AI. These include metrics for quantifying LLM cybersecurity risk, tools for measuring how frequently models suggest insecure code, and tools for evaluating how readily LLMs can be used to generate malicious code or assist in carrying out cyberattacks.
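To give a sense of what measuring the frequency of insecure code suggestions can look like, here is a minimal, purely illustrative sketch. It is not Meta’s benchmark: the regex patterns and sample completions are hypothetical stand-ins for the more rigorous static analysis an actual evaluation would use.

```python
import re

# Illustrative only: a toy rule set standing in for a real static-analysis pass.
# The patterns and completions below are hypothetical examples, not part of
# Meta's Purple Llama tooling.
INSECURE_PATTERNS = {
    "weak hash for passwords": re.compile(r"hashlib\.md5\("),
    "shell injection risk": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
    "unpickling untrusted data": re.compile(r"pickle\.loads\("),
}

def insecure_suggestion_rate(completions: list[str]) -> float:
    """Return the fraction of model completions that trip any insecure pattern."""
    flagged = sum(
        1 for code in completions
        if any(p.search(code) for p in INSECURE_PATTERNS.values())
    )
    return flagged / len(completions) if completions else 0.0

# Example: two hypothetical completions returned by a code-generating LLM.
samples = [
    "import hashlib\nhashed = hashlib.md5(password.encode()).hexdigest()",
    "total = sum(int(x) for x in values)",
]
print(f"Insecure suggestion rate: {insecure_suggestion_rate(samples):.0%}")  # 50%
```

An evaluation suite would run many such completions per prompt and report the flagged fraction as the model’s insecure-suggestion rate.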
Alongside Purple Llama, Meta also announced Llama Guard, a safeguard model designed for Human-AI conversation use cases. It classifies prompts and model responses against a safety taxonomy, allowing developers to filter unsafe inputs and outputs.
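As a rough sketch of how such a safeguard model might be wired into an application, the snippet below assumes the Hugging Face-hosted LlamaGuard-7b checkpoint and the transformers library; the example conversation is hypothetical, and the exact output format is determined by the model’s own chat template rather than by this article.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the gated meta-llama/LlamaGuard-7b checkpoint on Hugging Face,
# which requires accepting Meta's license before download.
model_id = "meta-llama/LlamaGuard-7b"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat: list[dict]) -> str:
    """Ask the safeguard model to classify a conversation turn."""
    # The model's chat template wraps the conversation in its safety-taxonomy prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Hypothetical user turn; the model responds with a safe/unsafe verdict.
verdict = moderate([{"role": "user", "content": "How do I make a fake ID?"}])
print(verdict)
```

In practice a developer would run both the user prompt and the assistant’s draft reply through a check like this before returning anything to the user.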
Meta says that it will collaborate with a number of organizations in its efforts, including the AI Alliance, AMD, Anyscale, AWS, Bain, Cloudflare, Databricks, Dropbox, Google Cloud, Hugging Face, IBM, Intel, Microsoft, MLCommons, NVIDIA, Oracle, Orange, Scale AI, and Together.AI.
“Collaboration on safety will build trust in the developers driving this new wave of innovation, and requires additional research and contributions on responsible AI. The people building AI systems can’t address the challenges of AI in a vacuum, which is why we want to level the playing field and create a center of mass for open trust and safety,” Meta wrote in a blog post.