OpenAI is announcing a new grant program to support researchers working on making superintelligent systems safe, as the company believes superintelligence could be achieved within the next decade.

According to the company, these advanced systems will “be capable of complex and creative behaviors that humans cannot fully understand.” 

The current approach to making AI systems safe, a process called alignment, relies on reinforcement learning from human feedback (RLHF). Because RLHF depends on human supervision, it may not scale to the far more capable behavior a superintelligent AI could produce. For instance, if an AI generates millions of lines of complex code, humans could not realistically evaluate all of it.
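
For context on the technique the article describes, here is a minimal sketch of the human-feedback step at the heart of RLHF. It is illustrative only, using synthetic data and a toy linear reward model rather than anything from OpenAI's actual pipeline: a reward model is fit to pairwise human preferences with a Bradley-Terry loss, and every training pair assumes a human can judge which response is better, which is exactly the step that breaks down for outputs too complex for humans to evaluate.

```python
# Toy sketch of reward-model training from human preferences (the "HF" in RLHF).
# All names and data here are illustrative assumptions, not OpenAI's method.
import numpy as np

rng = np.random.default_rng(0)

def reward(features, w):
    """Linear reward model: score = features . w."""
    return features @ w

def preference_loss(w, chosen, rejected):
    """Bradley-Terry negative log-likelihood that 'chosen' beats 'rejected'."""
    margin = reward(chosen, w) - reward(rejected, w)
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-margin))))

# Synthetic "human" preferences: each row is a feature vector for one response.
dim, n_pairs = 8, 500
true_w = rng.normal(size=dim)                  # stands in for human judgment
responses_a = rng.normal(size=(n_pairs, dim))
responses_b = rng.normal(size=(n_pairs, dim))
a_preferred = reward(responses_a, true_w) > reward(responses_b, true_w)
chosen = np.where(a_preferred[:, None], responses_a, responses_b)
rejected = np.where(a_preferred[:, None], responses_b, responses_a)

# Fit the reward model by gradient descent on the preference loss.
w = np.zeros(dim)
lr = 0.5
for step in range(200):
    margin = reward(chosen, w) - reward(rejected, w)
    grad = -((1.0 / (1.0 + np.exp(margin)))[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

print("final preference loss:", preference_loss(w, chosen, rejected))
# In full RLHF the learned reward would then steer the model (e.g. via PPO).
# The scaling problem the article describes: every preference pair above
# presumes a human could actually tell which response is better.
```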

“This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them? This is one of the most important unsolved technical problems in the world. But we think it is solvable with a concerted effort. There are many promising approaches and exciting directions, with lots of low-hanging fruit. We think there is an enormous opportunity for the ML research community and individual researchers to make major progress on this problem today,” OpenAI wrote in a blog post.  

To support researchers taking on the challenge of “Superalignment,” OpenAI is creating a $10 million grant program.

It will award grants of $100,000 to $2 million to academic labs, nonprofits, and individual researchers. OpenAI is also launching a one-year, $150,000 fellowship for graduate students, with half going to research funding and half to a stipend.

According to OpenAI, prior experience in alignment isn’t required, and the company is ready to support researchers who haven’t worked on it before.

Applications will remain open until February 18th, and applicants will hear back within four weeks of the deadline.

This is part of OpenAI’s larger Superalignment effort, announced in July, when the company formed a new team and said it would dedicate 20% of its compute over the next four years to the challenge. The project’s goal is to develop a “roughly human-level automated alignment researcher.”