
EU officials push for transparency in AI-generated content to combat disinformation

TL;DR

  • European Union officials are advocating for transparency in AI-generated content to combat disinformation.
  • Companies deploying generative AI tools, such as ChatGPT and Bard, should label their content and implement safeguards against the spread of disinformation.
  • EU tech industry signatories, including Google and Microsoft, are expected to report on their safeguards for AI-generated content, while Twitter’s actions will face increased regulatory scrutiny.

European Union (EU) officials are advocating for additional measures to promote transparency in artificial intelligence (AI) tools, including OpenAI’s ChatGPT, in order to tackle the spread of disinformation. Vera Jourova, the European Commission’s vice president for values and transparency, emphasized the need for companies deploying generative AI tools to label their content and implement safeguards against disseminating AI-generated disinformation.

Jourova called for signatories, including major tech companies such as Microsoft and Google, to recognize the potential of generative AI to produce disinformation and to clearly label such content. Clear labeling would empower users to distinguish between genuine and potentially misleading information. The EU’s existing “Code of Practice on Disinformation,” established in 2018, serves as a self-regulatory standard for the tech industry’s efforts against disinformation. Several prominent tech companies, including Google, Microsoft, and Meta Platforms, have signed onto the code and will be expected to report on their new safeguards for AI-generated content this July.

However, Jourova pointed to Twitter’s recent withdrawal from the code and warned that the company’s actions and compliance with EU law would be closely scrutinized. She stressed that Twitter must still adhere to regulatory standards and should expect rigorous assessment.

These discussions on transparency and labeling of AI-generated content are part of the EU’s broader efforts to regulate the use of AI technology. The forthcoming EU Artificial Intelligence Act aims to establish comprehensive guidelines for the public use of AI and the companies utilizing it. While the official laws are expected to be implemented in the next two to three years, EU officials have encouraged companies to adopt a voluntary code of conduct for generative AI developers in the interim.

Addressing concerns and safeguarding against AI-generated disinformation

As the popularity of generative AI tools like ChatGPT and Bard continues to rise, concerns about potential misuse and the spread of disinformation have emerged. European Commission Vice President Vera Jourova emphasized the importance of companies labeling AI-generated content to combat the dissemination of fake news.

Jourova highlighted Microsoft-backed OpenAI’s ChatGPT and Google’s Bard as generative AI tools that should incorporate necessary safeguards to prevent malicious actors from utilizing them to generate disinformation. By implementing technology to recognize and label AI-generated content, these companies can enhance transparency and empower users to make informed judgments about the information they consume.

In line with the EU Code of Practice on Disinformation, companies that have signed up, including Google, Microsoft, and Meta Platforms, are expected to report on the safeguards they have put in place to combat AI-generated disinformation. Jourova also cautioned Twitter, which recently withdrew from the code, to anticipate increased regulatory scrutiny and emphasized the need for the company to comply with EU law.

The EU’s focus on addressing AI-generated disinformation aligns with its broader efforts to establish guidelines for the responsible use of AI. The forthcoming EU Artificial Intelligence Act will provide comprehensive guidelines for the public use of AI and the companies deploying it. In the meantime, European officials are urging companies to take proactive measures and adopt a voluntary code of conduct to ensure the responsible development and deployment of generative AI technology.



Damilola Lawrence

Damilola is a crypto enthusiast, content writer, and journalist. When he is not writing, he spends most of his time reading and keeping tabs on exciting projects in the blockchain space. He also studies the ramifications of Web3 and blockchain development to have a stake in the future economy.
