Dan Nechita has spent the past year shuttling back and forth between Brussels and Strasbourg. As the head of cabinet (essentially chief of staff) for one of the two rapporteurs leading negotiations over the EU's proposed new AI law, he's helped hammer out compromises between those who want the technology to be tightly regulated and those who believe innovation needs more space to evolve.
The discussions have, Nechita says, been “long and tedious.” First there were debates about how to define AI—what it was that Europe was even regulating. “That was a very, very, very long discussion,” Nechita says. Then there was a split over what uses of AI were so dangerous they should be banned or categorized as high-risk. “We had an ideological divide between those who would want almost everything to be considered high-risk and those who would prefer to keep the list as small and precise as possible.”
But those often tense negotiations mean that the European Parliament is getting closer to a sweeping political agreement that would outline the body’s vision for regulating AI. That agreement is likely to include an outright ban on some uses of AI, such as predictive policing, and extra transparency requirements for AI judged to be high-risk, such as systems used in border control.
This is only the start of a long process. Once the members of the European Parliament (MEPs) vote on the agreement later this month, it will need to be negotiated all over again with EU member states. But Europe’s politicians are some of the first in the world to go through the grueling process of writing the rules of the road for AI. Their negotiations offer a glimpse of how politicians everywhere will have to strike a balance between protecting their societies from AI’s risks and trying to reap its rewards. What’s happening in Europe is being closely watched in other countries as they wrestle with how to shape their own responses to increasingly sophisticated and prevalent AI.
“It’s going to have a spillover effect globally, just as we witnessed with the EU General Data Protection Regulation,” says Brandie Nonnecke, director of the CITRIS Policy Lab at the University of California, Berkeley.
At the core of the debate about regulating AI is the question of whether it is possible to limit the risks the technology poses to society without stifling the growth of an industry that many politicians expect to be the engine of the future economy.
The discussions about risks should not focus on existential threats to the future of humanity, because there are major issues with the way AI is being used right now, says Matthias Spielkamp, cofounder of AlgorithmWatch, a nonprofit that researches the use of algorithms in government welfare systems, credit scoring, and the workplace, among other applications. He believes it is the role of politicians, not technologists, to set limits on how the technology can be used. “Take nuclear power: you can make energy out of it, or you can build bombs with it,” he says. “The question of what you do with AI is a political question. And it is not a question that should ever be decided by technologists.”