- UK’s election watchdog lacks power to combat AI-generated deepfakes, posing a threat to democratic processes.
- Recent deepfake incidents targeting political figures raise concerns about misinformation in elections.
- Tech giants like TikTok and Google are implementing measures to address deepfake content, but challenges persist.
The UK’s election watchdog has issued a stark warning that it lacks the authority to combat ‘deepfake’ content, leaving Britain vulnerable to the manipulation of its electoral processes by AI-generated fake videos of politicians in the upcoming elections.
Artificial intelligence experts have cautioned that the next elections in the UK could be swayed by convincing AI-generated ‘deepfake’ audio and video of political figures. This technology poses a significant risk to the integrity of democratic processes.
Recent deepfake incidents
In recent months, there have been notable instances of convincing AI-generated deepfake content targeting political figures, particularly within the Labour Party. In early October, a Twitter/X account posted what appeared to be audio of Labour leader Sir Keir Starmer making derogatory remarks about his party’s staffers. This was followed by a deepfake audio clip of the Labour Mayor of London, Sadiq Khan, appearing to disparage Remembrance commemorations.
The legal status of deepfake videos and audio involving political figures remains uncertain. While such content may potentially fall under the new malicious communications measures in the Online Safety Act, authorities have struggled to act on it. Notably, the Metropolitan Police dropped its investigation into the Sadiq Khan deepfake, concluding that it did not constitute a criminal offense.
Experts express concerns that AI-generated fake videos of politicians could go viral in the lead-up to the next election. The ease and low cost of producing convincing fake videos of public figures saying or doing scandalous things have made such interference increasingly accessible. Many experts in the field anticipate that hostile state actors may exploit emerging technology to influence UK elections and disrupt the democratic process.
As of November 2023, campaign materials are required to include an “imprint” indicating the publisher of certain political content online. However, there is no obligation to disclose whether the content is AI-generated, and the Electoral Commission lacks the authority to sanction or remove misinformation. While Ofcom possesses more regulatory powers in this area, the legal framework concerning AI-generated political misinformation and deepfakes remains ambiguous.
The Electoral Commission clarified its role in addressing deepfakes and campaign material content, stating that it is primarily responsible for regulating party and campaigner finance and ensuring compliance with digital imprint requirements. While the Commission acknowledges the challenges posed by AI-generated content, it does not possess the jurisdiction to combat deepfake-related issues directly.
The Commission is actively collaborating with partners and other regulatory bodies to gain a better understanding of the opportunities and challenges presented by AI technology in elections. It also encourages voters to critically evaluate online information and offers guidance on identifying fake news while directing concerns to relevant regulators.
Calls for strengthened regulatory powers
Recognizing the limitations of current regulations, the Electoral Commission is advocating for the UK government to enhance the authority of UK regulators, including the Commission itself. Specifically, the Commission seeks greater powers to obtain information from social media platforms, technology companies, and online payment providers. This initiative aims to identify the sources of funding for political misinformation campaigns, addressing a critical aspect of the issue.
Tech giants’ responses
Byline Times reached out to several tech giants to inquire about their policies and measures to combat harmful political deepfakes. TikTok, for instance, stated that it prohibits all political advertising and requires clear disclosure of synthetic media or manipulated content that presents realistic scenes. TikTok also maintains strict policies against misinformation.
Google/Alphabet emphasized its ongoing efforts to combat misinformation and its experimentation with watermarking technology to flag AI-generated content. Google announced SynthID, a tool that embeds imperceptible digital watermarks into AI-generated images so they can later be identified as synthetic.
Google also highlighted its progress in detecting synthetic speech with nearly 99% accuracy and revealed plans to update its Political Content policy. This update will require verified election advertisers to prominently disclose synthetic content in their ads, covering images, videos, and audio.
YouTube, owned by Google, has strict policies against “technically manipulated content” that misleads viewers and poses a risk of egregious harm. YouTube will also require creators to disclose when they create altered or synthetic content that appears realistic.
Challenges and risks of AI-generated misinformation
The primary risk associated with AI-generated deepfake content is the creation of convincing videos that depict events that never occurred or individuals saying and doing things they never did. This risk becomes particularly sensitive in the context of election campaigns, where public opinion can be swayed by such misleading content.
Meta/Facebook and Twitter’s responses
Meta/Facebook did not respond to inquiries about its policies and measures for addressing political deepfakes. Twitter/X sent only an automated reply, offering no details on its stance or actions regarding the issue.
As the threat of AI-generated deepfakes looms over the UK’s democratic processes, the nation faces a critical challenge in effectively regulating and combatting the spread of this deceptive content. The involvement of tech giants in implementing measures to counter deepfakes underscores the urgency of addressing this issue to protect the integrity of elections and democratic discourse.