
Britain enlists Microsoft for deepfake detection amid AI surge

In this post:

  • Britain partners with Microsoft to set standards for deepfake detection tools.
  • Framework targets fraud, abuse and impersonation threats online.
  • Regulators probe AI platforms over non-consensual images.

Britain has announced plans to work with Microsoft, academics and technical experts to build a deepfake detection system, as concern grows over the scale of AI-generated deception online.

The initiative puts deepfake detection at the center of a new push to curb harmful AI-generated content that is becoming increasingly realistic and harder to spot.

Britain is targeting fraud and non-consensual images

According to the government, the partnership will develop a deepfake detection assessment framework, creating a set of shared standards for evaluating tools that detect altered audio, video, and image files.

The framework will also benchmark these detection tools against real-world examples of misuse, including fraud, impersonation, and child sexual abuse imagery.
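The government has not published the framework's metrics, but benchmarks of this kind typically score a detector's verdicts against labelled real-world samples. The sketch below is purely illustrative — the function, data, and metric names are hypothetical, not part of the UK framework:

```python
# Illustrative sketch: scoring a hypothetical deepfake detector against
# a labelled benchmark. Detector verdicts and labels are invented here.

def score_detector(predictions, labels):
    """Return (detection rate, false-alarm rate) from binary verdicts."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    positives = sum(labels)                 # samples that really are deepfakes
    negatives = len(labels) - positives     # authentic samples
    detection_rate = tp / positives if positives else 0.0
    false_alarm_rate = fp / negatives if negatives else 0.0
    return detection_rate, false_alarm_rate

# labels: True = deepfake, False = authentic media
labels = [True, True, True, False, False]
predictions = [True, True, False, False, True]  # one detector's verdicts

dr, far = score_detector(predictions, labels)
print(f"detection rate: {dr:.2f}, false-alarm rate: {far:.2f}")
```

A shared standard of this sort lets regulators compare tools from different vendors on the same labelled samples rather than on vendors' own claims.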

Technology Minister Liz Kendall cautioned that the risk is not merely theoretical.

“Deepfakes are being used by criminals to deceive the public, take advantage of women and girls, and decrease the credibility of what we see and hear. This will continue until we take measures to protect citizens and democratic institutions from manipulation.”

Kendall

Media manipulation has existed for decades. However, experts say that AI has cut the money and skill required to produce a high-quality forgery to an all-time low.


In the UK, the rapid rise in AI-generated fakes has sharpened the focus on the crime of producing intimate images without consent.

According to government data, eight million deepfake images were produced in 2025, compared with only 500,000 in 2023, underscoring how quickly production is scaling.

The framework is designed to help law enforcement detect, prevent and prosecute these crimes, and to give industry a clear set of expectations on safety standards.

Governments have faced pressure to act for some time: Microsoft called on the US Congress in 2024 to pass new legislation targeting AI-generated deepfakes, with Brad Smith, the company’s Vice Chair and President, stressing the urgency for lawmakers to address the growing threat of deepfake technology.

In his blog post, Smith highlighted the importance of adapting laws to address deepfake fraud and prevent exploitation. Microsoft’s accompanying report outlined several possible legal interventions, chief among them a federal ‘deepfake fraud statute’ under which deepfake scams could be charged.


Pressuring platforms through regulation

Around the world, regulators are struggling to keep pace with rapid advances in AI technology.

In the UK, both the communications regulator (Ofcom) and the privacy regulator (the Information Commissioner’s Office) have begun investigating the Grok chatbot, operated by Elon Musk, after it produced non-consensual sexualized images of children.

As part of this investigation, the two regulators will work together on the new framework, helping law enforcement and regulatory agencies apply consistent standards when assessing detection tools.

According to Kendall, the purpose of this new framework is “to promote the restoration of trust in what people see and hear online,” and to require that all technology providers assume responsibility for mitigating potential harm related to the accelerating use of AI technologies.


