Google has long used AI to make its products more useful, and now it is releasing the seven principles it will follow when developing and assessing AI applications. The seven principles are:

  1. AI has to benefit society, taking into account a range of social and economic factors. Google will only proceed with a technology when the overall benefits significantly outweigh the foreseeable risks.
  2. AI has to avoid creating or reinforcing unfair bias, particularly with regard to characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
  3. AI has to be built and tested for safety.
  4. AI has to be accountable to people, providing opportunities for feedback, relevant explanations, and appeal.
  5. AI has to incorporate privacy design principles, offering opportunities for notice and consent, encouraging architectures with privacy safeguards, and providing appropriate transparency and control over data use.
  6. AI has to uphold high standards of scientific excellence, drawing on scientifically rigorous and multidisciplinary approaches. Google will share AI knowledge by publishing educational materials, best practices, and research that will enable more people to develop useful AI applications.
  7. AI has to be made available for uses that accord with these principles. Google will evaluate uses based on their primary purpose and use, nature and uniqueness, scale, and the nature of Google’s involvement.

“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” wrote Sundar Pichai, CEO of Google, in a post.

In addition to announcing these principles, the company named four application areas it will not pursue: technologies that cause or are likely to cause overall harm, weapons, technologies that gather information for surveillance in violation of internationally accepted norms, and technologies whose purpose contradicts accepted principles of international law and human rights.

“While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices,” Pichai wrote.