As the world gears up for a wave of national elections in 2024, concerns are mounting about the role of artificial intelligence (AI) in shaping the electoral landscape. Social media platforms, once essential tools for voter engagement, face criticism for their diminished content moderation efforts, which could allow misinformation and hate speech to run rampant. Moreover, the emergence of generative AI tools raises fresh concerns about the spread of realistic deepfakes and targeted disinformation campaigns.
Diminished content moderation
Major social media platforms, including Twitter (now X) and Meta (owner of Facebook, Instagram, and WhatsApp), have cut their content moderation staff since late 2022. This downsizing, partly attributed to corporate restructuring, has raised questions about the platforms' preparedness for the upcoming elections. Misinformation and hate speech thrive during election periods, posing a significant threat to democratic processes worldwide.
Kaleidoscope of uncertainty
Katie Harbath, an election integrity expert, compares the challenge of combating election-related misinformation to a kaleidoscope, emphasizing the complex and ever-shifting nature of the problem. While social media companies say they are committed to election integrity, their allocation of resources tells a different story: many observers believe these platforms prioritize Western democracies, leaving others underserved.
AI’s role in misinformation
AI's role in misinformation extends beyond text-based content. Generative AI tools capable of creating images, audio, and video could fuel the spread of highly realistic deepfakes. The technology makes it easier for malicious actors to tailor disinformation to specific audiences, exploiting their vulnerabilities and beliefs.
Guardrails and regulations
To address these challenges, various initiatives have been launched. Meta, TikTok, Microsoft, and YouTube have imposed disclosure requirements on creators and political advertisers who use AI-generated content. Governments and international organizations have also stepped in with regulatory frameworks, including the Biden administration’s executive order on AI, the AI Safety Summit in the United Kingdom, the United Nations’ AI advisory board, and the European Union’s AI Act, expected to take effect in 2025.
Alondra Nelson, a prominent figure in AI regulation efforts, expresses cautious optimism regarding these initiatives. She notes that while progress is being made, it is still in its early stages. Policymakers, industry stakeholders, and civil society face the challenge of reaching a consensus on what constitutes harmful AI-driven content and how to regulate it effectively.
The 2024 elections in over 50 countries, including major democracies like the United States and India, underscore the urgency of addressing AI's impact on electoral processes. Diminished content moderation on social media platforms and the proliferation of generative AI tools pose significant challenges to election integrity. Regulatory efforts are underway, but whether they can keep pace with the rapidly evolving landscape of AI-driven misinformation remains to be seen. As the world watches, the kaleidoscope of uncertainty turns, leaving the future of election security hanging in the balance.