European Union (EU) officials advocate for additional measures to promote transparency in artificial intelligence (AI) tools, including OpenAI’s ChatGPT, to tackle the spread of disinformation. Vera Jourova, the vice president for values and transparency at the European Commission, emphasized the need for companies deploying generative AI tools to label their content and implement safeguards against disseminating AI-generated disinformation.
Jourova called on signatories, including major tech companies such as Microsoft and Google, to recognize that generative AI can produce disinformation and to label such content clearly. This would empower users to differentiate between genuine and potentially misleading information. The EU’s existing “Code of Practice on Disinformation,” established in 2018, serves as a self-regulatory standard for the tech industry to combat disinformation. Several prominent tech companies, including Google, Microsoft, and Meta Platforms, have signed onto the EU’s code and will be expected to report on their new safeguards for AI-generated content this July.
However, Jourova pointed to Twitter’s recent withdrawal from the code, warning that the company’s actions and compliance with EU law would be closely scrutinized and stressing that Twitter must still adhere to regulatory standards and face rigorous assessment.
These discussions on transparency and labeling of AI-generated content are part of the EU’s broader efforts to regulate AI technology. The forthcoming EU Artificial Intelligence Act aims to establish comprehensive guidelines for the public use of AI and for the companies deploying it. While the legislation is not expected to take effect for another two to three years, EU officials have encouraged companies to adopt a voluntary code of conduct for generative AI developers in the interim.
Addressing concerns and safeguarding against AI-generated disinformation
As the popularity of generative AI tools like ChatGPT and Bard continues to rise, concerns about potential misuse and the spread of disinformation have emerged. European Commission Vice President Vera Jourova emphasized the importance of companies labeling AI-generated content to combat the dissemination of fake news.
Jourova highlighted Microsoft-backed OpenAI’s ChatGPT and Google’s Bard as generative AI tools that should incorporate safeguards to prevent malicious actors from using them to generate disinformation. By implementing technology to recognize and label AI-generated content, these companies can enhance transparency and enable users to make informed judgments about the information they consume.
Under the EU Code of Practice on Disinformation, signatories such as Google, Microsoft, and Meta Platforms are expected to report on the safeguards they have put in place against AI-generated disinformation. Jourova also cautioned Twitter, which recently withdrew from the code, to anticipate increased regulatory scrutiny and emphasized that the company must still comply with EU law.
The EU’s focus on AI-generated disinformation aligns with its broader push to establish guidelines for the responsible use of AI. Until the forthcoming EU Artificial Intelligence Act takes effect, European officials are urging companies to act proactively and adopt a voluntary code of conduct to ensure the responsible development and deployment of generative AI technology.