Rite Aid Ceases AI Facial Recognition Following FTC Settlement


  • Rite Aid stops using AI facial recognition after FTC settlement due to racial profiling allegations.
  • AI technology often replicates existing biases, unfairly targeting people of color in retail settings.
  • Experts advise caution in deploying AI surveillance and suggest testing to eliminate racial biases.

 Rite Aid, the prominent retail drugstore chain, has agreed to discontinue using AI-powered facial recognition technology as part of its shoplifting prevention measures. This decision comes in response to a legal settlement with the Federal Trade Commission (FTC) and amidst allegations of racial profiling. The AI system, employed since 2012, purportedly exhibited a bias, disproportionately targeting Black, Latino, and Asian shoppers over their white counterparts.

Racial profiling allegations lead to action

Rite Aid’s utilization of AI-powered facial recognition technology has been under scrutiny due to its alleged profiling of certain racial groups. Specifically, the system was employed to identify customers deemed likely to engage in theft, triggering match alerts for store employees when these individuals entered the premises. Disturbingly, the evidence presented in legal documents suggests that people of color were more frequently subjected to unwarranted surveillance, harassment, and public embarrassment.

The so-called “shopping while black” phenomenon, a long-standing concern, entails the unwarranted suspicion, constant monitoring, and unjust accusations of theft leveled at Black shoppers from the moment they enter stores. This issue extends to discriminatory practices such as locking Black haircare and beauty products behind glass cabinets while leaving similar items for white customers readily accessible on open shelves.

Rite Aid’s misuse of facial recognition technology underscores a disturbing case study of what some experts term “retail racism,” highlighting the problematic role AI can play in reinforcing and exacerbating racial inequalities.

The bias in AI

Rashawn Ray, a sociology professor at the University of Maryland and senior fellow at The Brookings Institution, points out that AI technologies often mirror existing inequalities because they are designed and developed in environments lacking diversity and inclusion. When the same stereotypes that permeate everyday encounters are encoded into algorithms, the result is facial recognition technology that perpetuates racial stereotypes, much like human bias.

Implicit biases, rooted in mental shortcuts that unconsciously associate certain groups with particular characteristics or behaviors, are a significant factor in this issue. In retail environments, these biases often manifest as employees wrongly associating shoppers of color with theft trends unsupported by data. Despite statistics indicating that most shoplifters in the U.S. are white, the misuse of AI technology continues to propagate the false narrative that shoplifters are predominantly people of color.

While theft poses a substantial financial risk to businesses, Rite Aid’s reliance on AI for surveillance led to unintended consequences. Shoppers of color who were wrongly targeted endured humiliation, and many took their business elsewhere, costing stores revenue. Some communities, particularly those lacking alternative shopping options, had no choice but to patronize establishments where they faced discrimination and abuse.

AI’s role beyond retail

The implications of biased AI extend beyond the retail sector, with concerns about its impact on policing and public safety. High-profile cases, such as the wrongful arrest of a pregnant Black woman in Detroit due to facial recognition technology errors, raise doubts about the trustworthiness of AI-powered surveillance systems.

Pew Research Center data reveal that Black respondents exhibit the least trust in the use of facial recognition technology in policing. Nearly half of Black individuals surveyed expressed concern that officers would misuse AI-powered technologies to surveil predominantly Black and Latino neighborhoods more frequently than other residential areas.

Calls for caution and testing

Earlier this year, five U.S. senators opposed the Transportation Security Administration’s (TSA) use of facial recognition technology at U.S. airports. They cited a study revealing that Black and Asian individuals were significantly more likely to be misidentified by such technology compared to white men. As part of its FTC settlement, Rite Aid has committed to suspending its use of AI-powered surveillance systems for the next five years. Experts suggest that other retailers follow suit, conducting rigorous and repeated testing to eliminate biases in these technologies.

Brenda Kanana
