The AI Act – Navigating the Grey Areas of Deregulation


  • The EU has reached a provisional agreement on the AI Act after trilogue negotiations addressing 21 open issues, but potential loopholes raise concerns that corporations may escape meaningful scrutiny.
  • The AI Act introduces a risk-based approach, outright prohibitions, transparency obligations for foundation models, a fundamental-rights impact assessment, fines, and an AI Office and Board. Yet worries persist that exemptions could undermine these protective measures.
  • Officials assert a balance between innovation and protection and tout the Act as a springboard for EU startups in the global AI race; critics see in that framing a concealed deregulatory agenda.

Late on Friday, the European Union institutions reached a provisional agreement on the much-anticipated AI Act after intense trilogue negotiations. Yet, amid the celebration of this significant step forward, concerns are emerging that potential loopholes could turn this regulatory milestone into a disguise for deregulation.

The AI Act, aimed at establishing a framework for the responsible development and use of artificial intelligence, is expected to strike a balance between innovation and societal protection. Yet, the intricacies of the agreement, particularly emerging from the last-minute negotiations, raise questions about the true intentions behind the seemingly comprehensive legislation.

Key components of the AI Act

The core principle of a risk-based approach, originally proposed by the European Commission, remains intact. 'High-risk' AI systems, those posing threats to health, safety, fundamental rights, the environment, democracy, or the rule of law, are subject to specific requirements. However, the addition of filtering conditions raises concerns that certain high-risk applications could slip through exemptions.

The AI Act bans AI systems posing unacceptable risks, including those that manipulate human behavior, exploit vulnerabilities, or engage in untargeted scraping of facial images. While the prohibition is broad, safety-based exceptions, such as permitting emotion recognition in the workplace for safety purposes, introduce ambiguity and potential loopholes.

Transparency obligations are introduced for foundation models, with stricter rules for 'high-impact' models posing systemic risk: model evaluations, systemic-risk assessments, adversarial testing, cybersecurity measures, and incident reporting. Until harmonized EU standards are in place, however, reliance on codes of practice may create uncertainty about what compliance requires.

The inclusion of a fundamental-rights impact assessment (FRIA) addresses concerns about discriminatory state surveillance and AI in law enforcement. Nevertheless, exemptions for law-enforcement agencies raise questions about the effectiveness of the FRIA in preventing harmful deployment of high-risk AI tools.

Violations of the AI Act will incur fines ranging from €35 million or 7% of global annual turnover for the most serious breaches down to €7.5 million or 1.5% of turnover for lesser infringements. While the fines aim to enforce compliance, critics warn that 'proportionate caps' could be exploited, favoring larger companies and encouraging them to offload risky AI projects onto startups.

The establishment of an AI Office and an AI Board forms the governance structure. The Office, housed within the European Commission, oversees advanced AI models, while the Board, composed of member states' representatives, acts as a coordination platform. The limited inclusion of societal stakeholders, admitted only as technical experts, may diminish the voice of civil society in shaping AI regulation.

Unexplored gaps and concerns in the AI Act

Despite assurances from key figures such as Thierry Breton expressing confidence in the balance between innovation and protection, concerns about potential loopholes in the AI Act persist. At a late-night news conference on Friday, officials acknowledged that filter conditions and exceptions might allow high-risk applications to escape scrutiny. Unresolved issues, such as carve-outs from the ban on biometric-identification systems and the deployment of high-risk AI tools in urgent circumstances, raise doubts about the law's efficacy.

Thierry Breton's assertion that the AI Act is more than a mere rule book, serving as a launchpad for EU startups and researchers to take the lead in the global AI race, signals an ambition to establish the EU as a dominant force in artificial intelligence. Critics argue that this reflects a deregulatory approach, prioritizing the support and advancement of EU-based AI companies over stringent regulation.

As the AI Act moves closer to becoming a reality, the debate surrounding its effectiveness and potential loopholes intensifies. Will the legislation truly strike a balance between fostering innovation and safeguarding society, or is it a carefully disguised deregulation strategy? The coming months will unveil the true impact of the AI Act on the AI landscape within the European Union and beyond.


Aamir Sheikh

Aamir is a media, marketing, and content professional working in the digital industry. A veteran in content production, Aamir is now an enthusiastic cryptocurrency proponent, analyst, and writer.
