UNESCO Report: AI Continues to Amplify Gender Bias


  • A UNESCO report finds that AI models still exhibit gender bias, with particular risks in healthcare and finance.
  • AI models often produce sexist content, undermining gender equality efforts.
  • Diverse training data and transparent development practices are needed to reduce bias.

A recent UNESCO report has shed light on the persistent gender bias perpetuated by contemporary Artificial Intelligence (AI) models. Despite ongoing efforts to mitigate prejudice, the study found that AI systems, including large language models (LLMs), are prone to amplifying societal biases, particularly around gender.

Gender bias in AI models

The report revealed that popular LLMs often generate sexist and misogynistic content, reflecting societal prejudices. For instance, when prompted with sentences mentioning gender and sexual identity, Meta’s open-source Llama 2 model produced outputs containing sexist or misogynistic language in 20% of cases. While some AI models, like ChatGPT, exhibited better behavior, biases were still present even after fine-tuning.

UNESCO warned that if left unaddressed, algorithmic bias could become more deeply embedded in critical sectors such as healthcare and finance. Biased AI algorithms could exacerbate existing gender disparities in these fields, hindering efforts to achieve gender equality.

One of the significant concerns highlighted in the report is the impact of biased medical data on AI-powered healthcare systems. Data collection practices have historically favored male subjects, leading to gender gaps in AI training data. This bias has tangible consequences: a study of AI tools used to screen for liver disease found that they missed 44% of cases in women because the training data was skewed towards men.
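A gap like this typically surfaces when a model's error rate is broken down by group rather than averaged over everyone. The following Python sketch shows one way such an audit can be done; the function, field names, and toy records are hypothetical illustrations, not taken from the study the report cites.

```python
# Hedged sketch: auditing a screening model's miss rate by gender.
# All names and numbers below are illustrative, not from the cited study.

def miss_rate_by_group(records):
    """Fraction of true cases the model failed to flag, per group.

    records: list of dicts with keys 'group', 'has_disease', 'flagged'.
    """
    totals, misses = {}, {}
    for r in records:
        if not r["has_disease"]:
            continue  # miss rate is computed over actual cases only
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        if not r["flagged"]:
            misses[g] = misses.get(g, 0) + 1
    return {g: misses.get(g, 0) / totals[g] for g in totals}

# Toy data: a model trained mostly on male subjects misses more female cases.
records = (
    [{"group": "male", "has_disease": True, "flagged": True}] * 9
    + [{"group": "male", "has_disease": True, "flagged": False}] * 1
    + [{"group": "female", "has_disease": True, "flagged": True}] * 5
    + [{"group": "female", "has_disease": True, "flagged": False}] * 4
)
rates = miss_rate_by_group(records)
print(rates)  # male misses 1 of 10 cases; female misses 4 of 9
```

An aggregate accuracy figure would hide this disparity entirely, which is why per-group breakdowns are a standard first step in bias audits.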

Addressing the imbalance

Sandy Carter, COO of Unstoppable Domains, emphasized the importance of addressing the gender gap in AI training data. She advocated for increased data transparency to highlight gender skews and proposed novel approaches such as crowd-sourcing women’s health data or generating synthetic data to mitigate discrepancies.

Carter underscored the necessity of fair representation in training data for developing equitable AI systems. By incorporating diverse data sources and embracing transparency in data collection practices, developers can work towards minimizing biases in AI models.
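As a rough illustration of the kind of audit-and-rebalance workflow described above, the sketch below first measures gender skew in a training set, then oversamples the under-represented group by duplication. This is a deliberately crude stand-in for the synthetic data generation Carter proposes (real pipelines use far more sophisticated methods), and all field names and data are hypothetical.

```python
# Hedged sketch: surface gender skew in a dataset, then rebalance it by
# oversampling. Duplication stands in for true synthetic data generation.
import random
from collections import Counter

def gender_balance(dataset):
    """Report each gender's share of the dataset."""
    counts = Counter(r["gender"] for r in dataset)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def rebalance(dataset, seed=0):
    """Oversample minority-gender records until all groups are equal in size."""
    rng = random.Random(seed)
    counts = Counter(r["gender"] for r in dataset)
    target = max(counts.values())
    augmented = list(dataset)
    for g, n in counts.items():
        pool = [r for r in dataset if r["gender"] == g]
        augmented += [rng.choice(pool) for _ in range(target - n)]
    return augmented

data = [{"gender": "male"}] * 8 + [{"gender": "female"}] * 2
print(gender_balance(data))             # 80/20 split before rebalancing
print(gender_balance(rebalance(data)))  # 50/50 split after rebalancing
```

The transparency step matters as much as the fix: publishing the before-and-after balance figures is what lets outsiders verify that a skew was actually addressed.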

The UNESCO report serves as a stark reminder of the ongoing challenge posed by gender bias in AI. To build AI systems that serve all users equitably, concerted efforts are needed to address biases at every stage of development, from data collection to model deployment.

By raising awareness of these issues and advocating for inclusive practices, stakeholders can work towards realizing the potential of AI to advance gender equality in healthcare and beyond. Only through collaborative action and a commitment to fairness can the promise of AI technology be fully realized for all individuals, regardless of gender.


James Kinoti

A crypto enthusiast, James enjoys sharing knowledge on fintech, cryptocurrency, blockchain, and other frontier technologies. The latest innovations in the crypto industry, crypto gaming, AI, and blockchain technology are his focus, and his mission is to track transformative applications across industries.
