Top AI Models Lack Transparency, New Index Shows




  • A new index shows most top AI models are lacking in transparency.
  • Transparency is essential for regulators to effectively regulate AI for consumer safety.
  • The lack of transparency could also make it difficult to root out biases, privacy issues, and other harms AI models may pose.

Researchers at Stanford’s Center for Research on Foundation Models (CRFM) have recently found that the biggest AI foundation models are lacking in transparency, and that is cause for concern, considering how powerful the technology is and how fast it is spreading across the world. 

Top AI Models are Lacking in Transparency

To score the transparency of the top 10 foundation models, the researchers teamed with other experts at MIT and Princeton to develop the “Foundation Model Transparency Index.” The index evaluates 100 different aspects of transparency based on how a company builds a foundation model, how it works, and how it is used downstream.

The index ranked Meta’s Llama 2 highest at 54%, followed by BigScience’s BLOOMZ (53%) and OpenAI’s GPT-4 (48%). Google’s PaLM 2 had a transparency rating of 40%, while the least transparent model was Amazon’s Titan Text at 12%. 
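The percentages above reflect how many of the index’s 100 indicators each model satisfies. As a rough illustration (the indicator names below are hypothetical, not the actual FMTI indicators), a score like this can be sketched as the share of satisfied indicators:

```python
# Hedged sketch: a transparency score computed as the percentage of
# satisfied indicators. The real index uses 100 indicators; the three
# example indicators below are invented for illustration only.

def transparency_score(indicators: dict[str, bool]) -> float:
    """Return the percentage of indicators marked as satisfied."""
    if not indicators:
        return 0.0
    satisfied = sum(indicators.values())  # True counts as 1, False as 0
    return 100 * satisfied / len(indicators)

# Hypothetical example with three indicators:
example = {
    "discloses training data sources": True,
    "documents downstream usage policy": True,
    "reports labor used in data annotation": False,
}
print(transparency_score(example))  # 2 of 3 satisfied -> ~66.7
```

A model satisfying 54 of 100 such indicators would score 54%, matching the format of the rankings above.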

“This is a pretty clear indication of how these companies compare to their competitors, and we hope it will motivate them to improve their transparency,” said Rishi Bommasani, Society Lead at CRFM. 

Why Does it Matter?

Transparency in AI models is essential to protect consumers from certain threats and to guarantee that these tools are safe to use. 

The lack of transparency makes it harder for businesses to know whether they can safely build applications on commercial foundation models, and for academics to rely on those models for research. 

On the regulatory side, transparency is a major priority. It is important for rooting out bias and privacy violations, and it helps policymakers around the world formulate rules to regulate AI models. 

“If you don’t have transparency, regulators can’t even pose the right questions, let alone take action in these areas,” Bommasani said. 

Some countries, including Australia, have already begun enforcing measures to ensure the transparency and integrity of AI models. In September, Cryptopolitan reported that the Australian government has empowered citizens with the right to request meaningful information about how automated decisions are made.

The government demands that AI companies provide this information in a clear and comprehensible manner, ensuring that citizens can understand how AI influences their lives.

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.


Ibiam Wayas

Ibiam is an optimistic crypto journalist. Five years from now, he sees himself establishing a unique crypto media outlet that will bridge the gap between the crypto world and the general public. He loves to associate with like-minded individuals and collaborate with them on similar projects. He spends much of his time honing his writing and critical thinking skills.

