What risks does the world's first law on artificial intelligence pose, and why does the EU need it?
On 13 March 2024, the European Parliament approved the Artificial Intelligence Act, which aims to ensure safety and compliance with fundamental rights. The Act still needs formal approval by the Council of the EU, and most of its provisions will apply 24 months after it enters into force.
Although the EU is not a technological leader in AI development, with this move it stakes a significant claim to political and legal leadership in the global race against the US and China. This claim, however, does not guarantee success, write Daryna Boyko and Ivan Horodyskyi of the Dnistrianskyi Centre in their article Outpacing US and China: Why EU is creating the world's first AI regulation.
The EU Artificial Intelligence Act further expands the famous Brussels bureaucracy. A key role in its implementation will be played by the newly established AI Office, which will oversee application of the new rules, monitor compliance with the Act, and investigate violations.
The Act provides for two special categories of AI: "banned" and "high-risk".
"Banned" systems include technologies for emotion recognition and behaviour manipulation, social scoring, and predictive policing, as well as most biometric identification systems.
AI technologies that pose significant risks to health, safety, or fundamental rights are classified as "high-risk".
The new legislation has received significant political support, but the main resistance to its adoption comes, predictably, from big business.
Companies planning to deploy their AI technologies internationally see the rules as a threat to their continued operations in the EU market.
As early as June 2023, over 150 major corporations from various sectors, including Siemens, Airbus, and Danone, signed an open letter urging the EU to rethink its plans to regulate AI. The signatories argued that the draft rules could undermine the EU's technological potential and "jeopardise Europe’s competitiveness and technological sovereignty."
Following the adoption of the Act, one of the largest technology companies, Meta, opposed any EU measures that could restrain innovation. "It is critical we don't lose sight of AI’s huge potential to foster European innovation and enable competition, and openness is key here," said Marco Pancini, Meta's head of EU affairs.
Large companies are not the only opponents of the regulation, though.
The governments of France and Germany also expressed concerns, stating that its mandatory rules would harm European startups, and that options for "self-regulation" should be considered.
"Europe is now a global standard-setter in trustworthy AI," said Thierry Breton, European Commissioner for Internal Market.
In this sense, the EU's leadership in regulating the development and use of AI is already indisputable. However, legal leadership does not guarantee technological success.
Equally important will be further regulation of the industry by other global leaders – China, the US, and the UK, which prioritise technological development and industry growth in their AI strategies.
Lighter regulation in those jurisdictions could turn business concerns into reality: European companies may relocate their operations to more favourable jurisdictions.