
UK Labour Party calls for licensing and regulation of AI developers

TL;DR

Lucy Powell, the UK Labour Party's digital spokesperson, has proposed that developers of artificial intelligence (AI) be licensed and regulated in the same way as industries such as pharmaceuticals, medicine, and nuclear energy. She believes that companies like OpenAI and Google should be required to obtain licenses to build their AI models.

Powell expressed concerns about the lack of regulation surrounding large language models, which can be applied across various AI tools. She emphasized the need for governing the development, management, and control of these models.

“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed, or how they are controlled.”

Powell argues that regulating the development of AI technologies is a better approach than outright bans, citing the European Union's ban on facial recognition tools as an example of the latter. She acknowledges that AI can have unintended consequences, but believes some risks can be mitigated if developers are transparent about their AI training models and datasets. Given the rapid pace of technological advancement, Powell suggests an active, interventionist approach by the UK government is necessary.

The Labour Party, of which Powell is a representative, is reportedly working on its own policies regarding AI and related technologies. Party leader Keir Starmer plans to meet with the party’s shadow cabinet at Google’s UK offices to engage with AI-focused executives.

UK officials on the potential risks of AI

On the same day, Matt Clifford, chair of the Advanced Research and Invention Agency (ARIA), the UK government's research agency established in February, expressed concerns about the potential threats posed by AI. Appearing on TalkTV, Clifford warned that AI could pose a threat to humans within as little as two years if regulations and safety measures are not put in place. He highlighted the risk of large-scale cyberattacks enabled by AI tools and urged policymakers to prioritize the issue.

Clifford mentioned that OpenAI has pledged $1 million to support the development of AI-aided cybersecurity technologies to counter potential misuse. He stressed the importance of considering various scenarios and ensuring that AI regulation remains a top priority.

Lucy Powell and the Labour Party are advocating for the licensing and regulation of developers working on AI models. Powell emphasizes the need for regulation to govern the development, management, and control of AI tools. Matt Clifford has also voiced concerns about the potential risks associated with AI and urges policymakers to take proactive measures to ensure safety. OpenAI has shown support for addressing AI-related risks through financial contributions to cybersecurity technology development.


Lacton Muriuki

Lacton is an experienced journalist specializing in blockchain-based technologies, including NFTs and cryptocurrency. He covers daily crypto news backed by well-researched statistics, bringing a human perspective to technology.


