Seven leading AI companies on Friday voluntarily committed to advancing the safe, secure, and transparent development of AI technology: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe. To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” the White House stated in a press release that contains additional details.
The companies have committed to internal and external security testing of their AI systems before release, carried out in part by independent experts, to guard against major risks such as biosecurity and cybersecurity threats as well as AI's broader societal effects. They have also pledged to share information on managing AI risks across the industry and with governments, civil society, and academia.
The commitments also require investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights, and facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
The companies will have to implement robust technical mechanisms, such as watermarking, to ensure users can identify AI-generated content. This approach aims to promote creativity with AI while minimizing fraud and deception risks.
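The announcement does not specify how these watermarking mechanisms will work. As a loose illustration only, the toy Python sketch below tags generated text with an invisible sequence of zero-width Unicode characters and later detects it; the marker string and function names here are hypothetical, and real content-provenance schemes rely on far more robust statistical or cryptographic techniques.

```python
# Toy illustration of content watermarking -- NOT any company's actual method.
# Embeds an invisible zero-width-character marker in text, then detects it.

ZW_MARK = "\u200b\u200c\u200b"  # hypothetical marker: zero-width space / non-joiner

def embed_watermark(text: str) -> str:
    """Append an invisible marker identifying text as machine-generated."""
    return text + ZW_MARK

def detect_watermark(text: str) -> bool:
    """Return True if the invisible marker is present in the text."""
    return ZW_MARK in text

generated = embed_watermark("This paragraph was produced by a model.")
print(detect_watermark(generated))                 # True
print(detect_watermark("Human-written text."))     # False
```

Note the obvious weakness of such a naive scheme: stripping non-printing characters removes the mark, which is why production approaches instead bias the model's own sampling statistics or attach signed provenance metadata.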
Lastly, they will prioritize research on the societal risks of AI, focusing on preventing harmful bias and discrimination and on safeguarding privacy. They will also publicly disclose their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use.