Startups Seek Clarity and Sector-Specific AI Regulation Amidst AI Safety Summit Discussions



  • Startups at the UK AI safety summit seek regulatory clarity for sector-specific AI rules.
  • Balancing immediate and long-term AI risks is crucial for effective regulation.
  • Debates continue over how to regulate the most powerful AI systems.


The recent UK AI safety summit brought together industry leaders, policymakers, and startups to discuss the challenges and risks associated with advanced AI systems. While many saw the event as a positive diplomatic step, startups building AI products now emphasize the need for regulatory clarity and sector-specific approaches to AI regulation. 

Building connections and gathering information

Connor Leahy, co-founder of AI safety startup Conjecture, pointed out that the summit primarily served as a platform for networking, information exchange, and relationship building rather than a venue for policy decisions. Founders like Leahy highlighted the importance of fostering connections within the AI community.

The call for clear rules

Startups are calling for clearer regulations in the UK to provide a stable environment for their AI ventures. Eric Topham, co-founder of Octaipipe, a London-based startup focused on data security for AI products on physical devices, emphasized the need for clarity in data security rules. He contrasted the UK’s lack of defined standards with the forthcoming Cyber Resilience Act in Europe, which penalizes device data breaches.

However, startups like Octaipipe are cautious about adopting a one-size-fits-all approach to AI legislation. Many founders believe that sector-specific regulations are more suitable than the EU’s horizontal approach, which applies the same rules across diverse industries. Alex Kendall, co-founder of autonomous driving company Wayve, argued that different sectors face unique AI-related risks and should have tailored regulations.

Balancing immediate and long-term risks

While the summit largely focused on “catastrophic” risks associated with future super-powerful AI systems, startups are urging policymakers to address both immediate and long-term challenges. Marc Warner, CEO of Faculty, emphasized the importance of tackling near-term and long-term risks simultaneously, using the analogy of caring about both seatbelts and catalytic converters in the automotive industry.

Regulating frontier AI

The regulation of the most powerful AI systems, such as those trained on vast amounts of data and computing power, remains a contentious issue in the AI sector. Some experts, like Yann LeCun from Meta, argue that these systems are not truly intelligent but sophisticated autocomplete tools and may not require strict regulation.

In contrast, startups like Conjecture propose setting maximum limits on compute power for training new models to prevent AI systems from surpassing human capabilities. They argue that uncontrolled super-intelligent AI systems could lead to machines controlling the future, displacing human control.

Mistral, a Paris-based generative AI startup, advocates for compulsory independent oversight of big tech companies’ AI models. This would allow public research institutions to study these models and ensure transparency. While some big tech players like OpenAI and Google DeepMind have entered into voluntary agreements for external testing, Mistral co-founder Arthur Mensch insists that legislation is necessary to prevent these companies from regulating themselves.

Mustafa Suleyman, co-founder of DeepMind and now CEO at Inflection, acknowledges that voluntary agreements are a positive first step. However, he stresses balancing regulation with companies’ intellectual property rights.

In the wake of the UK AI safety summit, AI startups are seeking regulatory clarity and advocating for sector-specific approaches to AI regulation. While the summit concentrated on long-term risks posed by advanced AI systems, startups stressed that immediate challenges deserve equal attention. The debate over how to regulate the most powerful AI systems continues, with some calling for strict limits on computing power and others for independent oversight of big tech companies’ models. Despite the open questions, the ongoing conversation marks a positive step toward shaping the future of AI regulation, and striking the right balance between innovation and oversight will be crucial to the industry’s growth and responsible development.


Glory Kaburu
