The European Union plans to pump 1.4 billion euros into artificial intelligence (AI) and deep tech research next year.
It’s Europe’s attempt to catch up with the United States and China in the race for technological dominance. The funding, an increase of nearly 200 million euros over last year, will flow through the European Innovation Council (EIC) under the EU’s Horizon Europe program.
According to EU officials, this funding aims to jumpstart Europe’s economy by focusing on tech growth, especially AI. EU Commissioner Iliana Ivanova explained it like this:
“The European Innovation Council has emerged as a gamechanger in EU support to breakthrough innovation. In 2025, it will boost EU deep tech with even more resources, amounting to 1.4 billion euros from Horizon Europe, our research and innovation program.”
Europe’s AI lags behind
During a visit to Copenhagen, Nvidia’s CEO Jensen Huang bluntly stated that Europe is trailing in AI investments compared to the U.S. and China. He stressed, “The EU has to accelerate the progress in AI. There’s an awakening in every country realizing that the data is a national resource.”
Nvidia, whose hardware underpins many AI advancements, including OpenAI’s ChatGPT, is the world’s leading GPU maker, supplying chips critical to AI applications.
Huang was in Denmark for the launch of Gefion, a supercomputer featuring 1,528 GPUs, built by Nvidia with the Novo Nordisk Foundation and Denmark’s Export and Investment Fund. Denmark plans to use this powerful setup to drive research in drug discovery, disease diagnosis, and complex life sciences.
“The era of computer-aided drug discovery must be within this decade,” he added. Nvidia’s massive role in AI hardware underscores Europe’s reliance on non-European tech, a dependency the EU aims to reduce by boosting its own AI capabilities.
A few European companies, such as France’s Mistral and Germany’s Aleph Alpha, are attempting to break into the AI market. Europe also has the world’s first comprehensive set of AI regulations, known as the EU AI Act, to govern AI applications. This legislation took effect in August and will apply in full by August 2026.
What the Act is all about
The EU AI Act introduces a regulatory framework for AI systems, implementing a risk-based approach. Applications will be classified based on their potential impacts on safety, human rights, and societal welfare.
Some systems are banned outright, while those deemed “high-risk” are subject to stricter requirements and assessments before they can be deployed.
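To make the risk-based structure more concrete, here is a minimal, hypothetical sketch of how an organization might map example use cases onto the Act’s commonly cited risk tiers (unacceptable, high, limited, minimal). The example systems, the obligation summaries, and the obligations_for helper are illustrative assumptions, not legal guidance or an official taxonomy from the Act.

```python
# Illustrative sketch only: a toy mapping of the EU AI Act's commonly cited
# risk tiers to the kinds of obligations each tier triggers. Tier names follow
# public summaries of the Act; the example systems and obligation text are
# simplified assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright (e.g. social scoring)
    HIGH = "high"                   # allowed, but with strict pre-deployment duties
    LIMITED = "limited"             # mainly transparency duties
    MINIMAL = "minimal"             # largely unregulated

# Hypothetical examples of how specific use cases might map onto the tiers.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

# Simplified summaries of what each tier requires.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited - cannot be placed on the EU market",
    RiskTier.HIGH: "conformity assessment, risk management, documentation, human oversight",
    RiskTier.LIMITED: "transparency, e.g. disclose that users are interacting with AI",
    RiskTier.MINIMAL: "no specific obligations under the Act",
}

def obligations_for(use_case: str) -> str:
    """Return the simplified obligation summary for a known example use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.value}-risk -> {OBLIGATIONS[tier]}"

if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(obligations_for(case))
```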
Under this legislation, all businesses operating within the EU that deploy or develop AI are classified into several categories, such as Providers, Deployers, Distributors, Importers, Product Manufacturers, and Authorized Representatives.
The Act has extraterritorial reach, meaning it applies to any company dealing with AI within the EU, regardless of where it is headquartered. Compliance for high-risk applications will be rigorous, with companies required to conduct assessments and follow strict documentation practices.
These regulations closely align with GDPR standards, stressing transparency, accountability, and ethical use. To meet these requirements, organizations will need to implement staff training, robust governance, and cybersecurity protocols.
The EU has started developing specific codes of practice and templates to assist companies in meeting these compliance standards.
Experts recommend that businesses unsure about their responsibilities seek professional guidance and use tools like the EU AI Act Compliance Checker to verify alignment with these rules.
Despite these heavy regulations, some argue that the EU AI Act could push European companies to innovate more responsibly, potentially offering a competitive advantage in the long run.