The use of artificial intelligence (AI) to create fake images for political gain has come under the spotlight. With the 2024 U.S. presidential election on the horizon, supporters of Donald Trump have been found using AI-generated images to falsely depict black voters as Trump supporters. The strategy aims to sway African American voters towards the Republican Party.
Deepfakes enter politics
BBC Panorama’s investigation has unveiled the creation and spread of numerous deepfakes that show black individuals endorsing Trump. These images are crafted with the help of AI, making them look real at first glance. However, upon closer inspection, signs like overly shiny skin and missing fingers reveal their artificial origin.
Trump has actively sought to win over black voters, a group that played a pivotal role in Joe Biden’s victory in 2020. While there’s no direct link to Trump’s campaign, the fake visuals are seen as part of a larger effort to paint Trump as popular among black communities.
A home-grown trend
Unlike in 2016, when foreign interference was a major concern, the AI fakes identified by the BBC appear to originate from within the U.S., created by American voters themselves. Mark Kaye, a conservative radio host in Florida, admitted to creating and sharing one such image on Facebook, where he has a following of over a million.
Kaye’s image, which portrays Trump posing with black supporters, was designed to accompany an article about black voter support for Trump. Although the image is fabricated, Kaye defends his actions by describing himself as a storyteller rather than a photojournalist.
Social media’s double-edged sword
The spread of these AI images on social media platforms has sparked a debate on the responsibility of content creators and the gullibility of users. Some images gain traction and are believed to be real, influencing public opinion based on falsehoods. This issue highlights the challenges social media platforms face in combating AI-generated misinformation.
The impact on black voters
The targeting of black voters with such tactics is not new. Cliff Albright, co-founder of Black Voters Matter, notes a resurgence of disinformation aimed at black communities, particularly younger voters. These efforts are strategic, intending to chip away at the support base of Democratic candidates by portraying a false narrative of Trump’s popularity among black voters.
A shift in tactics
The political landscape has evolved significantly since Trump’s win in 2016, shifting from foreign attempts to influence elections via fake accounts to domestically produced false narratives claiming election fraud in 2020. The introduction of AI-generated content now adds a new layer of complexity to the disinformation ecosystem.
Experts warn of a dangerous blend of domestic and foreign manipulation in the 2024 elections. Ben Nimmo, a former Meta executive, emphasizes the need for vigilance among influencers and social media users to avoid unwittingly spreading misleading content.
Social media companies’ response
Major social media platforms have implemented policies to counter influence operations, including measures to deal with AI-generated content. However, the effectiveness of these policies remains to be seen, as political partisans and provocateurs continue to find new ways to exploit these platforms for their agendas.
As the U.S. gears up for another election year, the challenge of distinguishing real from fake has never been more critical. The use of AI in creating deepfakes represents a significant threat to the integrity of the electoral process. It calls for a concerted effort from individuals, social media companies, and regulators to ensure that the democratic process is not undermined by advanced technological manipulations.