
Can AI Become Unbiased in Healthcare? Researchers Struggle to Find an Answer


In this post:

  • MIT researchers found that AI might exacerbate inequities and biases in the healthcare system.
  • The team discovered four subpopulation shifts that cause biases in machine learning models.
  • To achieve more equitable AI models in healthcare, the researchers said there remains a need to understand the sources of unfairness better.

Every patient, irrespective of their physical characteristics and identities, should have access to good healthcare. However, certain people and groups are often denied fair treatment in the healthcare system because of inequities and implicit biases in medical diagnosis and treatment.

AI Models in Healthcare are Biased

A group of researchers from the Massachusetts Institute of Technology (MIT) has now found that artificial intelligence and machine learning can deepen healthcare disparities and inequities for subgroups that are often underrepresented in the data, affecting how these groups are diagnosed and treated.

Led by Marzyeh Ghassemi, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), the researchers released a research paper analyzing the roots of the disparities that can arise in AI, causing models that perform well overall to falter on underrepresented subgroups.

The analysis focused on “subpopulation shifts,” which the report defined as the “differences in the way machine learning models perform for one subgroup as compared to another.” The primary objective was to determine the kinds of subpopulation shifts that can occur with AI techniques and potentially inform future advancements for more equitable models. 
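
One illustrative way to picture such a shift, sketched below with invented data and group labels (not from the MIT study), is to compare a model's accuracy across subgroups and look at the gap between the best- and worst-served group:

```python
# Illustrative only: quantify a subpopulation shift as the gap between a
# model's accuracy on different subgroups. Names and predictions are invented.
import numpy as np

def subgroup_accuracies(y_true, y_pred, group):
    """Accuracy per subgroup plus the best-vs-worst gap."""
    accs = {g: float(np.mean(y_true[group == g] == y_pred[group == g]))
            for g in np.unique(group)}
    return accs, max(accs.values()) - min(accs.values())

# Toy predictions for six patients split into two groups.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 0])
group  = np.array(["male", "male", "male", "female", "female", "female"])
print(subgroup_accuracies(y_true, y_pred, group))
# -> perfect accuracy for one group, zero for the other: a large gap
```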

“We want the models to be fair and work equally well for all groups, but instead, we consistently observe the presence of shifts among different groups that can lead to inferior medical diagnosis and treatment,” says MIT Ph.D. student Yuzhe Yang.


Researchers Identify 4 Shifts That Stir Biases in AI Models

The MIT researchers identified four types of shifts – spurious correlations, attribute imbalance, class imbalance, and attribute generalization – that cause inequities and biases in AI techniques.

“Biases can, in fact, stem from what the researchers call the class, or from the attribute, or both,” the report reads. 

The researchers gave an example in which machine learning models determine whether or not a person has pneumonia based on an examination of X-ray images, with two attributes (the people being X-rayed are either female or male) and two classes (one consisting of people who have the lung ailment, the other of people who are infection-free).

“If, in this particular dataset, there were 100 males diagnosed with pneumonia for every female diagnosed with pneumonia, that could lead to an attribute imbalance, and the model would likely do a better job of correctly detecting pneumonia for a man than for a woman,” the team explained.
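
As a rough illustration of that scenario, the sketch below trains a simple classifier on synthetic data (the features and counts are invented, not taken from the MIT dataset) in which male pneumonia cases outnumber female ones 100 to 1, and then checks how often pneumonia is detected in each group:

```python
# Synthetic sketch of attribute imbalance (all numbers and features invented,
# not the MIT data): 100 "male" pneumonia cases for every "female" one, with
# the disease showing up in a different feature per group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def cases(n, centre):
    """Fake 2-feature 'X-ray findings' clustered around a group-specific centre."""
    return rng.normal(loc=centre, scale=0.5, size=(n, 2))

healthy = np.zeros(2)                                        # healthy patients of either sex
male_pneu, female_pneu = np.array([3.0, 0.0]), np.array([0.0, 3.0])

# Training set: 300 male pneumonia cases, only 3 female pneumonia cases.
X_train = np.vstack([cases(300, male_pneu), cases(3, female_pneu),
                     cases(300, healthy), cases(300, healthy)])
y_train = np.concatenate([np.ones(303), np.zeros(600)])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Detection rate (recall) on fresh pneumonia cases from each group.
print("male recall:  ", model.predict(cases(200, male_pneu)).mean())
print("female recall:", model.predict(cases(200, female_pneu)).mean())
# The model typically detects noticeably fewer pneumonia cases in the rare group.
```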

Can AI Models Work in an Unbiased Manner?

The MIT researchers said they were able to reduce the occurrence of spurious correlations, class imbalance, and attribute imbalance by improving the “classifier” and “encoder.” However, the other shift, “attribute generalization,” persisted. 

“No matter what we did to the encoder or classifier, we did not see any improvements in terms of attribute generalization,” Yang says, “and we don’t yet know how to address that.”
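
For context, one standard mitigation for class and attribute imbalance, sketched generically here and not necessarily the adjustment the MIT team made to the classifier or encoder, is to reweight training examples by the inverse frequency of their class-attribute subgroup:

```python
# Generic mitigation sketch (not necessarily the MIT team's method): reweight
# each training example by the inverse frequency of its (class, attribute)
# subgroup so that rare combinations such as "female + pneumonia" are not
# drowned out during training.
from collections import Counter
import numpy as np

def subgroup_weights(y, attribute):
    """Inverse-frequency weight per (class, attribute) subgroup."""
    keys = list(zip(y.tolist(), attribute.tolist()))
    counts = Counter(keys)
    n, k = len(keys), len(counts)
    return np.array([n / (k * counts[key]) for key in keys])

# Tiny illustration: three male pneumonia cases, one female pneumonia case.
y    = np.array([1, 1, 1, 1, 0, 0])
attr = np.array(["m", "m", "m", "f", "m", "f"])
print(subgroup_weights(y, attr))   # the lone female pneumonia case gets a larger weight
# Many scikit-learn estimators accept these weights via fit(X, y, sample_weight=...).
```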


The team is currently exploring public datasets covering tens of thousands of patients and chest X-rays to determine whether healthcare professionals can achieve fairness in medical diagnosis and treatment when machine learning models are involved.

However, they acknowledged that a better understanding of the sources of unfairness, and of how they permeate the current system, is still needed to arrive at the desired level of equity.
