
Deepfake of President Biden Urging Voter Suppression Triggers Investigation

TL;DR

  • A deepfake audio clip of President Biden discouraging voting has triggered an investigation and prompted ElevenLabs, the company whose technology was used to create it, to suspend the account responsible.
  • The incident heightens concerns about deepfake misuse in elections; New Hampshire authorities are treating it as an attempt at voter suppression, and experts warn of more such incidents to come.
  • Detection remains difficult: tools gave mixed results, but analyses by Pindrop and a university researcher point to ElevenLabs’ technology, underscoring the need for responsible use of AI voice tools.

An alarming deepfake audio clip featuring US President Joe Biden urging people not to vote in the New Hampshire primary has sparked a swift response from both the private sector and law enforcement. The creator of the deepfake has been suspended by ElevenLabs, a startup specializing in artificial intelligence (AI) software for voice replication. The incident has raised serious concerns about the potential misuse of AI-driven deepfake technology to disrupt elections and suppress voter turnout.

ElevenLabs, whose technology was used to create the deepfake audio, has taken immediate action by suspending the user responsible for generating the malicious content. Pindrop Security Inc., a voice-fraud detection company, confirmed that ElevenLabs’ technology was used to produce the deepfake.

Upon learning of Pindrop’s findings, ElevenLabs launched an internal investigation into the incident. The startup, which recently secured an impressive $80 million in financing from investors including Andreessen Horowitz and Sequoia Capital, now faces questions about the potential misuse of its AI-driven voice-cloning technology.

Alarming implications for election security

The deepfake robocall featuring President Biden has raised significant concerns among disinformation experts and elections officials. Not only does this incident underscore the relative ease with which audio deepfakes can be created, but it also highlights the potential for bad actors to leverage this technology to deter voters from participating in elections.

The New Hampshire Attorney General’s office has launched an official investigation into the deepfake incident, considering it an unlawful attempt to disrupt the presidential primary election and suppress New Hampshire voters. This development underscores the seriousness with which authorities are treating the potential misuse of deepfake technology in the political arena.

Detection challenges

Efforts to identify the technology behind the deepfake ran into difficulties. ElevenLabs’ own “speech classifier” tool indicated only a 2% likelihood that the audio was synthetic, while other deepfake detection tools flagged the recording as a deepfake but could not pinpoint the technology used to create it.

Pindrop Security Inc. took a deep dive into the audio, cleaning it by removing background noise and silences and breaking it into segments for analysis. The company compared the audio to a database of samples collected from more than 100 text-to-speech systems commonly used to produce deepfakes. Its analysis strongly suggests that the deepfake was created using ElevenLabs’ technology.
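Pindrop has not published the exact details of its pipeline, but the general approach described above (trimming noise and silence, segmenting the audio, and comparing it against fingerprints of known text-to-speech systems) can be illustrated with a minimal sketch. The snippet below is a hypothetical example using librosa and NumPy; the file paths, the reference database, and the use of time-averaged MFCCs as a stand-in for proper learned embeddings are all assumptions for illustration, not Pindrop’s actual method.

```python
# Hypothetical sketch of a deepfake-attribution pipeline, loosely modeled on the
# steps described above (silence trimming, segmentation, comparison against
# reference TTS systems). This is NOT Pindrop's actual system; features,
# fingerprints, and paths are illustrative placeholders.
import numpy as np
import librosa

def segment_audio(path, sr=16000, top_db=30):
    """Load audio, drop silent stretches, and return the non-silent segments."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    intervals = librosa.effects.split(y, top_db=top_db)  # non-silent [start, end) sample ranges
    return [y[start:end] for start, end in intervals], sr

def embed(segment, sr):
    """Crude fixed-length 'fingerprint': time-averaged MFCCs, standing in for the
    learned embeddings a real attribution system would use."""
    mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=20)
    return np.mean(mfcc, axis=1)

def cosine(a, b):
    """Cosine similarity between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def best_match(query_path, reference_db):
    """Compare each segment of the query audio against reference fingerprints of
    known text-to-speech systems; return the most similar system per segment."""
    segments, sr = segment_audio(query_path)
    results = []
    for seg in segments:
        q = embed(seg, sr)
        scores = {name: cosine(q, ref) for name, ref in reference_db.items()}
        results.append(max(scores.items(), key=lambda kv: kv[1]))
    return results

# Example usage (paths and reference fingerprints are hypothetical):
# reference_db = {"tts_system_a": np.load("tts_a_fingerprint.npy"),
#                 "tts_system_b": np.load("tts_b_fingerprint.npy")}
# print(best_match("robocall.wav", reference_db))
```

In a production attribution system the hand-built fingerprints would be replaced by learned embeddings and calibrated similarity thresholds, but the segment-then-compare structure mirrors the process Pindrop describes.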

Siwei Lyu, a professor at the University at Buffalo specializing in deepfakes and digital media forensics, also analyzed the recording. He ran the deepfake through ElevenLabs’ classifier and likewise concluded that the company’s software was likely used, noting how widely the tool is used in the field.

Concerns for the future

Experts warn that incidents like this are likely to increase as the general election approaches. The ability to use deepfake technology to create highly personalized and deceptive content is a growing concern for election security.

While ElevenLabs says the majority of its use cases are positive and that it has a team devoted to content moderation, this incident highlights the potential for misuse that platforms like it must address. Ensuring that AI-driven tools are not used for malicious purposes remains a significant challenge.

ElevenLabs’ swift response in suspending the creator and launching an internal investigation reflects the gravity of the situation. As authorities dig deeper into the incident, it is clear that the misuse of AI-driven deepfake technology in the political realm is a pressing concern, with the potential to undermine the integrity of elections. The investigation will shed more light on the origins and consequences of this episode and prompt a broader discussion about the responsible use of voice-cloning technology in the digital age.

