Two US senators, Richard Blumenthal and Josh Hawley, have raised concerns with Meta chief executive Mark Zuckerberg about the tech giant’s “leaked” artificial intelligence model, LLaMA. The senators argue that LLaMA poses potential dangers and could be exploited for criminal activities.
In a letter dated June 6, the senators criticized Zuckerberg’s decision to open source LLaMA and claimed that there were insufficient safeguards in Meta’s release of the AI model. While they acknowledged the benefits of open-source software, they contended that Meta’s release of LLaMA lacked thorough consideration of the potential consequences, which they deemed a disservice to the public.
Initially, LLaMA had a limited online release to researchers. In late February, however, it was leaked in its entirety by a user on the imageboard 4chan. The senators expressed alarm that the full model became readily available on BitTorrent, without any monitoring or oversight, accessible to anyone worldwide. Blumenthal and Hawley asserted that LLaMA could be easily adopted by spammers, cybercriminals, and individuals involved in fraudulent activities or the distribution of objectionable content.
US senators’ concerns
To highlight their concerns, the senators contrasted LLaMA with two closed-source models: OpenAI’s ChatGPT-4 and Google’s Bard. They noted that when asked to generate a note impersonating someone’s son asking for money, ChatGPT-4 would deny the request based on its ethical guidelines, whereas LLaMA would produce the requested letter, as well as responses involving self-harm, crime, and antisemitism. This distinction, they argued, illustrates LLaMA’s potential for generating abusive material.
While ChatGPT is programmed to reject certain requests, users have found ways to “jailbreak” the model and make it generate responses it would typically refuse. The senators questioned Zuckerberg about whether any risk assessments were conducted prior to LLaMA’s release, what measures Meta has taken to prevent or mitigate harm since the leak, and how Meta utilizes user data for AI research, among other inquiries.
It is worth noting that OpenAI, under pressure from advancements made by open-source models, is reportedly working on an open-source AI model of its own. A leaked document authored by a senior software engineer at Google highlighted the rapid progress these open-source models have made.
Open-sourcing an AI model’s code allows it to be customized for specific purposes and facilitates contributions from other developers. However, the concerns raised by Senators Blumenthal and Hawley underscore the need for responsible, thoughtful approaches to releasing and monitoring AI models in order to prevent potential misuse and harm.