The EU has finally passed the AI Act, the comprehensive, risk-based AI regulation it has been working on for the past several years.

“The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated,” said Enza Iannopollo, principal analyst at Forrester. “The EU AI Act is the world’s first and only set of binding requirements to mitigate AI risks.”

The new law will rank AI based on potential risks and use that risk level to determine how much regulation is needed. 

In an upcoming episode of the SD Times podcast, Duane Pozza, a former assistant director at the FTC and now an AI lawyer at Wiley Rein LLP, says: “What’s interesting here is that it focuses in large part on what it calls high-risk AI. So, there are a lot of requirements, particularly around investments and controls around safety, that will apply when AI is used for a whole category of higher risk use cases … really putting guardrails in those areas and then having a lighter touch … with AI that might be used for other purposes that are sort of in the lower risk spectrum.”

The EU considers the following uses to be high-risk: critical infrastructure, education and vocational training, employment, essential services, certain law enforcement systems, migration and border management, and justice and democratic processes. AI used in those areas will require risk-mitigation steps, such as maintaining use logs, providing transparency into systems, and ensuring human oversight. 

According to the EU, citizens can also submit formal complaints about AI systems if they believe those systems are impacting their rights. 

General-purpose AI models will also be subject to transparency requirements and must comply with EU copyright law. Creators of those models will have to publish detailed summaries of the data used to train them. Deepfake images, audio, and video will also have to be clearly labeled so that people know they have been altered by AI. 

“The goal is to enable institutions to exploit AI fully, in a safer, more trustworthy, and inclusive manner,” said Iannopollo. “Like it or not, with this regulation, the EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI. Every other region can only play catch-up.” 

Iannopollo recommends companies start organizing AI compliance teams now so that they are ready to meet the requirements. She said that complying with the regulation will “require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite.”

“The EU has delivered,” said Dragos Tudorache, member of the European Parliament and co-rapporteur of the Civil Liberties Committee. “We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”