Taming AI Discrimination The Anthropic Way: Persuade It



  • Discrimination is one major concern that needs to be addressed in AI algorithms.
  • AI discrimination stems from various causes, including data, human error, and design.
  • Anthropic recently conducted a test, sharing new prompting tactics to prevent AI discrimination.

The potential of AI technology is touted often across media outlets, but nestled among the benefits lurks a dark side, where the line between progress and prejudice becomes dangerously blurred.

AI discrimination is an insidious issue that threatens to exacerbate existing societal inequalities and raises profound ethical questions about the future of technology.

The Roots of Bias: Data, Design, and Human Error?

AI algorithms are only as good as the data they are trained on. Unfortunately, much of the data used in AI development is riddled with biases, reflecting the inherent prejudices woven into human society. These biases can be based on race, gender, age, religion, socioeconomic status, and other factors. 

When biased data is fed into an algorithm, the result is an AI system that perpetuates and amplifies those biases, leading to discriminatory outcomes.

The design of the models themselves can introduce bias. For example, facial recognition software has been shown to be less accurate in identifying people of colour. Similarly, algorithms used in loan approvals or job applications can inadvertently disadvantage certain groups based on biased criteria.

Human error is also to blame. The programmers, data scientists, and other individuals involved in developing AI systems are not immune to their own biases. These biases can unconsciously creep into the design and implementation of algorithms, further compounding the problem of AI discrimination.

Anthropic’s Tactics for Stopping AI Discrimination

Interestingly, we may also be able to reduce biased answers from AI through the act of persuasion.

Anthropic, one of the leading AI companies, recently conducted a test showing that prompting strategies, such as adding "discrimination is illegal" to a prompt, could steer AI models toward unbiased responses. In essence, you instruct the model directly to ensure its answer is unbiased.
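As a rough illustration of the tactic described above, the prompt intervention can be as simple as appending an anti-discrimination instruction before the prompt reaches the model. The function name and instruction wording below are hypothetical, not Anthropic's exact phrasing:

```python
# Illustrative sketch of the prompting tactic: append an explicit
# anti-discrimination instruction to the user's prompt. The wording
# here is an assumption, not Anthropic's published intervention text.

INTERVENTION = (
    "Please note that discrimination is illegal. "
    "Base your decision only on factors relevant to the application."
)

def add_bias_intervention(prompt: str) -> str:
    """Return the prompt with the anti-discrimination instruction appended."""
    return f"{prompt}\n\n{INTERVENTION}"

base_prompt = "Should this loan application be approved? Applicant details: ..."
print(add_bias_intervention(base_prompt))
```

The augmented prompt would then be sent to the model as usual; the study's finding is that this extra instruction alone measurably reduces discriminatory decisions.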

While this strategy seemed to reduce discrimination in AI model decisions for areas like loans, jobs, insurance claims, and others, it’s only a temporary fix that addresses the symptoms, not the root cause of the problem.

Confronting the challenge demands a collaborative effort between technologists, policymakers, researchers, civil society organizations, and individuals. 

Addressing the biases in data is a crucial first step. It necessitates diversifying datasets and employing techniques like debiasing algorithms. Additionally, developers and designers must be trained to identify and mitigate bias in their work.
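One simple debiasing technique of the kind mentioned above is reweighting: giving each training example a weight inversely proportional to how often its group appears, so under-represented groups are not drowned out during training. This is a minimal sketch, not any specific library's implementation:

```python
# Minimal sketch of dataset reweighting for debiasing: examples from
# rarer groups receive larger weights so each group contributes equally
# to the training objective.
from collections import Counter

def group_weights(groups):
    """Map each example to a weight inversely proportional to the
    frequency of its group label in the dataset."""
    counts = Counter(groups)
    n = len(groups)
    return [n / (len(counts) * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]  # group A is over-represented
print(group_weights(groups))   # examples from group B get higher weight
```

Real debiasing pipelines go further (e.g. adjusting labels or model constraints), but the weighting idea captures the core intuition.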

Robust regulatory frameworks are equally needed to enforce ethical development and deployment of the models. Setting clear guidelines for data collection, algorithm design, and the use of AI in critical decision-making processes could help control discriminatory outcomes from AI models.


Ibiam Wayas

