Last week, seven companies convened at the White House and committed to developing AI technology in a way that is safe, secure, and transparent. 

Now, four of those companies — Anthropic, Google, Microsoft, and OpenAI — have announced that they have teamed up to launch the Frontier Model Forum, an industry organization dedicated to the safe and responsible development of frontier AI models. The Forum defines frontier models as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks," according to a Google blog post.

The Forum has four main objectives. First, it aims to advance AI safety research so that frontier models can be developed responsibly and with minimal risk. The Forum wants these models to undergo "independent, standardized evaluations of capabilities and safety."

The second goal is to identify best practices that can be shared with the public to help people understand the impact of these technologies, the Forum explained.

Third, the Forum aims to collaborate and share knowledge with policymakers, academics, civil society, and other companies.

And finally, the Forum hopes to support the development of applications that address important societal issues, such as climate change mitigation, early cancer detection, and combating cyber threats.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity,” said Brad Smith, vice chair and president of Microsoft.

The next step for the Forum will be to set up an advisory board to guide its strategy and priorities. The founding companies will also establish a charter, governance arrangements, and funding.

The group will work with governments and civil society over the next several weeks to discuss how best to collaborate. 

Anna Makanju, vice president of Global Affairs at OpenAI, added: "Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety."