AI in Schools Puts LGBTQ+ Students at Risk, New Study Reveals


  • AI in schools unintentionally harms LGBTQ+ students through surveillance and content filtering.
  • Widespread confusion among stakeholders highlights the need for AI education.
  • Calls for protections to prevent discrimination and ensure responsible AI use in education.

A recent report from the Center for Democracy and Technology raises concerns about the impact of artificial intelligence (AI) in schools on vulnerable students, particularly those who are LGBTQ+, disabled, or students of color. The study, which surveyed students, parents, and teachers, sheds light on the unintended consequences of AI-powered technologies used in educational settings.

A risk to vulnerable students

The Center for Democracy and Technology’s study found that technologies designed to block explicit adult content and identify students at risk of self-harm or harming others are inadvertently putting already vulnerable students in jeopardy. LGBTQ+ students, in particular, are more likely to face negative repercussions for their online activities through student activity monitoring compared to their straight, cisgender peers.

Outing without consent: Alarmingly, 29 percent of LGBTQ+ students reported that they or another student had been outed as LGBTQ+, a potentially traumatizing event. This finding underscores the significant privacy risks of AI monitoring in schools.

Disproportionate discipline: The study also found that 50 percent of LGBTQ+ students surveyed reported that they or another student had been disciplined for online activities, compared to 39 percent of non-LGBTQ+ students. This disparity raises questions about fairness and equity.

The prevalence of student activity monitoring

Student activity monitoring is widespread in educational institutions: nearly 88 percent of teachers reported its use on school-provided devices, and 40 percent said their schools also monitored students' personal devices. The surveillance extends beyond school hours, with 38 percent of teachers reporting monitoring outside of the school day.

Digital book ban: restricting LGBTQ+ and race-related content

The report also highlights that a third of teachers interviewed expressed concerns about AI content filters disproportionately restricting content related to LGBTQ+ topics and content exploring race. This restriction is likened to a “digital book ban” and raises concerns about censorship and access to diverse perspectives.

The call for education and awareness

One of the striking findings of the study is the "widespread confusion" among parents, students, and teachers about the role of artificial intelligence in the classroom. Many stakeholders want more information and training on how to use AI technologies properly. Despite the White House's release of a Blueprint for an AI Bill of Rights in 2022, 57 percent of teachers in the most recent survey reported that they have not received substantive training on AI.

A persistent concern

It’s important to note that this recent report echoes concerns previously raised by the Center for Democracy and Technology in their 2022 research paper, “Hidden Harms: Targeting LGBTQ+ Students.” In that study, algorithms used by schools automatically flagged words like “gay” and “lesbian” in students’ messages and documents, potentially outing them without their consent.

The need for protections

Elizabeth Laird, director of equity in civic technology at the Center for Democracy and Technology, emphasized that certain groups of students should already be protected by existing civil rights laws, yet they continue to experience disproportionate negative consequences from the use of education data and technology. Schools need to be more intentional about how they deploy these technologies to prevent discrimination and other harms.

Advocating for change

Civil rights groups, including the ACLU, the American Association of School Librarians, American Library Association, Disability Rights in Education Defense Fund, and the Electronic Frontier Foundation, have signed a letter supporting education-related protections in light of the growing prominence of generative AI in schools. The importance of ensuring that AI technologies are used responsibly and do not harm marginalized students cannot be overstated.

The integration of AI in schools offers numerous benefits, but it also raises significant concerns, especially for vulnerable students. It is imperative that educational institutions, policymakers, and technology providers work together to address these issues and ensure that AI technologies are employed in ways that promote equity, inclusivity, and the well-being of all students.


John Palmer
