With the rapid uptake of AI technology like GPT over the past several months, many are thinking about ethical responsibility in AI development.

According to Google, responsible AI means not just avoiding risks, but also finding ways to improve people’s lives and address social and scientific problems, as these new technologies have applications in disaster prediction, medicine, precision agriculture, and more. 

“We recognize that cutting-edge AI developments are emergent technologies — that learning how to assess their risks and capabilities goes well beyond mechanically programming rules into the realm of training models and assessing outcomes,” Kent Walker, president of global affairs for Google and Alphabet, wrote in a blog post.

Google points to four practices it believes are crucial to putting AI principles into action. 

First, there needs to be education and training so that teams working with these technologies understand how the principles apply to their work. 

Second, these teams need access to tools, techniques, and infrastructure they can use to implement the principles.

Third, there needs to be oversight through processes like risk assessment frameworks, ethics reviews, and executive accountability. 

Fourth, partnerships should be in place so that external perspectives can be brought in to share insights and responsible practices. 

“There are reasons for us as a society to be optimistic that thoughtful approaches and new ideas from across the AI ecosystem will help us navigate the transition, find collective solutions and maximize AI’s amazing potential,” Walker wrote. “But it will take the proverbial village — collaboration and deep engagement from all of us — to get this right.”

According to Google, two strong examples of responsible AI frameworks are the U.S. National Institute of Standards and Technology’s AI Risk Management Framework and the OECD’s AI Principles and AI Policy Observatory. “Developed through open and collaborative processes, they provide clear guidelines that can adapt to new AI applications, risks and developments,” Walker wrote.

Google isn’t the only one concerned about responsible AI development. Recently, Elon Musk, Steve Wozniak, Andrew Yang, and other prominent figures signed an open letter imploring tech companies to pause development of AI systems until “we are confident that their effects will be positive and their risks will be manageable.” The specific ask was that AI labs pause development for at least six months on any system more powerful than GPT-4. 

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall,” the letter states.