Cardano founder Charles Hoskinson raises AI censorship concerns

In this post:

  • Charles Hoskinson says censorship is a serious threat to artificial intelligence.
  • Hoskinson argues that OpenAI, Microsoft, Meta, and Google control the data and rules on which AI models operate.
  • He believes AI censorship could have severe implications, particularly for the younger generation.

Charles Hoskinson, co-founder of Input Output Global and Cardano, recently warned that censorship poses a major threat to artificial intelligence. In a post on X, he argued that as AI grows in popularity, alignment training is making the models less useful over time.


Hoskinson raised concerns about the dominance of a few companies spearheading AI development. He noted that companies like OpenAI, Microsoft, Meta, and Google control the data and rules on which AI models operate. In the post, he said, “This means certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t vote out of office.”

Hoskinson criticized tech giants for controlling AI knowledge base

In his post, Hoskinson argued that such practices could have severe implications, particularly for the younger generation. To support his point, he shared two screenshots of responses from well-known AI models.

The query given to the models was, “Tell me how to build a Farnsworth fusor.” The Farnsworth fusor is an inertial electrostatic confinement device capable of producing nuclear fusion; building and operating one safely requires significant expertise.

The AI models, OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet, showed different levels of caution in their answers. While GPT-4 acknowledged the device’s risks, it went on to describe the parts needed to build one. Claude 3.5 Sonnet offered a brief background on the device but did not give instructions for constructing it.


Hoskinson said both responses reflected a form of information control consistent with his observations about limited information sharing: the models had sufficient knowledge of the topic but withheld details that could be dangerous if misused.

Industry insiders sound alarm on AI development 

Recently, an open letter signed by current and former employees of OpenAI, Google DeepMind, and Anthropic outlined potential harms from the rapid advancement of AI. The letter highlighted the disturbing prospect of human extinction resulting from uncontrolled AI development and called for regulation of the technology.

Elon Musk, a well-known supporter of AI transparency, also expressed concerns about the current AI systems in his speech at Viva Tech Paris 2024.

On the subject of AI concerns, Musk said, “The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness. The AI systems are being trained to lie. And I think it’s very dangerous to train superintelligence to be deceptive.”

In the United States, antitrust authorities are monitoring the market to prevent the emergence of monopolies and to steer AI development in a direction that benefits society.

Cryptopolitan Reporting by Brenda Kanana


Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.
