
The UK government is not transparent about its use of AI, says Technology Secretary

In this post:

  • Several government departments are already using AI and algorithms.
  • The Home Office, for instance, is reportedly using an AI-enabled immigration enforcement system.
  • Rights campaigners say departments must be open about how they use AI.

The UK government is reportedly not listing its use of AI systems on mandatory registers, raising concerns about transparency issues.

According to The Guardian, the Technology Secretary has admitted that departments are not transparent about their use of AI and algorithms, even though several of them already rely on these systems for a range of purposes.

UK government departments “flying blind” about their AI use

The Guardian article reveals that not a single department has listed its use of AI since the government indicated that doing so would become mandatory. This has raised concerns that the public sector is “flying blind.”

This comes as government departments in the UK are already using AI to inform decisions on everything from benefit payments to immigration enforcement. According to The Guardian, contract records show that public bodies have awarded deals for AI and algorithmic services.

For example, the police procurement body recently put up a contract for facial recognition worth $20 million, further raising concerns about “mass biometric surveillance.”

However, only nine algorithmic systems have been submitted to a public register, and none of the growing number of AI programs used in the welfare system, by the Home Office, or by the police appears among them. In February, the UK government announced that the use of AI registers would become a “requirement for all government departments.”

When asked about the lack of transparency, the Secretary of State for Science and Technology, Peter Kyle, admitted that the public sector “hasn’t taken seriously enough the need to be transparent in the way that the government uses algorithms.”

“I accept that if the government is using algorithms on behalf of the public, the public has a right to know.”

Kyle.

“The public needs to feel that algorithms are there to serve them and not the other way around. The only way to do that is to be transparent about their use,” added Kyle.


The UK government ignored warnings

According to The Guardian article, experts have warned about the dangers of adopting AI uncritically, pointing to recent examples of IT systems that did not work as intended, including the Post Office’s Horizon software.

The use of AI technology in Whitehall spans from Microsoft’s Copilot system to automated fraud and error checks in the benefits system. A recent AI contract notice by the Department for Work and Pensions (DWP) described “a mushrooming of interest within DWP, which mirrors that of wider government and society.”

Apart from the police, the Home Office also uses an AI-enabled immigration enforcement system. This system, which critics call a “robo-caseworker,” helps shape decisions, including whether to return people to their home countries.

The government, however, has described it as a “rules-based” system rather than an AI system, because it does not involve machine learning from data. According to the government, the system brings efficiencies while a human remains responsible for each decision.

NHS England is also reportedly in a £330 million contract with Palantir to build a huge new data platform, further raising concerns about patient privacy. Palantir has said patients retain control of their data.

Privacy rights campaign group Big Brother Watch said the emergence of the police facial recognition contract, despite MPs’ warnings that no legislation exists to regulate the technology, showed the government’s lack of transparency over its use of AI.


“The secretive use of AI and algorithms to impact people’s lives puts everyone’s data rights at risk,” said Madeleine Stone, the group’s chief advocacy officer.

“Government departments must be open and honest about how they use this tech.”

Stone.

Imogen Parker, associate director at the data and AI research body, also expressed concerns over the lack of transparency.

“Lack of transparency isn’t just keeping the public in the dark, it also means the public sector is flying blind in its adoption of AI.”

Parker.

“Failing to publish algorithmic transparency records limits the public sector’s ability to determine whether these tools work, learn from what doesn’t, and monitor their different social impacts,” added Parker.
