European lawmakers are warning against proposed changes to the European Union's landmark artificial intelligence act, saying the moves could let major US tech companies sidestep the law's core provisions.
The AI Act’s architects believe that shifting key rules from mandatory to voluntary would undermine efforts to prevent harmful content and election meddling by firms such as OpenAI and Google.
In a letter to the commission's digital chief, Henna Virkkunen, prominent members of the European parliament warned that the plan is "dangerous, undemocratic and creates legal uncertainty."
The letter’s signatories include several MEPs who helped draft the act, as well as Carme Artigas, the former Spanish minister for digitalization and AI. They warn that if the most influential AI providers behave irresponsibly, it could “deeply disrupt Europe’s economy and democracy.”
The current act divides AI systems into three risk categories
The European Commission is now debating whether to relax parts of the law, following intense pressure from the Trump administration and Big Tech groups. Under the current act, widely considered the world's strictest regulatory framework for AI, systems are divided into three risk categories. High-risk applications, such as those used in healthcare or public transport, must meet heavier reporting and transparency requirements. Powerful models also face obligations to disclose how they were trained and to avoid generating harmful or false information.
Central to the present dispute is a "code of practice" meant to guide AI companies in meeting these rules. The code is being drafted by a group of experts, including Turing Award winner Yoshua Bengio, with a final version expected in May. According to sources familiar with the process, the experts are trying to preserve the law's force while still persuading major tech players to sign up.
US tech companies have been lobbying against the AI Act
Brussels has encountered stiff resistance from American firms. In February, Meta’s head of global affairs, Joel Kaplan, told a Brussels audience that the code’s provisions would be “unworkable and technically unfeasible.” Meta also says it cannot release its newest multimodal large language models or its latest AI assistant in Europe because of the region’s privacy rules. Google, along with European companies such as Spotify and Ericsson, has also criticized parts of the legislation.
Meanwhile, US vice-president JD Vance, speaking at France's AI Summit in Paris, criticized "excessive regulation of AI" and insisted that "AI must remain free from ideological bias." His remarks came amid a push by the commission, under its new term that began in December, to attract more AI investment. In line with that goal, the commission recently withdrew a proposed AI liability directive, framing the move as part of a broader deregulatory agenda.
Despite this emphasis on investment, Virkkunen stressed at the Financial Times' Enabling Europe's AI Ambitions event on Tuesday that the code of practice should help businesses, including small and medium-sized enterprises, by offering guidelines, "not setting more obstacles or reporting obligations." She emphasized that fundamental principles remain intact: "We want to make sure that Europe has a fair, safe and democratic environment also when it comes to the digital world," she said.
Lawmakers who helped design the AI Act argue that turning major parts of it into voluntary measures would undermine these values. They maintain that tech providers with far-reaching influence must be held accountable if their models produce disinformation or enable election interference. In their view, weakened requirements could open the door to discriminatory outcomes or new forms of misuse.