Study Reveals Why AI Models That Analyze Medical Images Can Be Biased


AI models that analyze medical images can introduce biases, highlighting the need for training on local data to ensure fairness and accuracy.

 

Copyright: news.mit.edu – “Study Reveals Why AI Models That Analyze Medical Images Can Be Biased”


 

These models, which can predict a patient’s race, gender, and age, seem to use those traits as shortcuts when making medical diagnoses.

Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analyzing images such as X-rays. However, studies have found that these models don't perform equally well across all demographic groups, often faring worse for women and people of color.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays — something that the most skilled radiologists can’t do.

That research team has now found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” — that is, discrepancies in their ability to accurately diagnose images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.
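To make the idea concrete: a "fairness gap" of this kind can be expressed as the spread in a performance metric across demographic groups. The following is a minimal, purely illustrative sketch, not the study's actual code; the function name, toy data, and the use of plain accuracy (in place of the AUC-style measures such studies typically report) are all assumptions made for this example.

```python
import numpy as np

def fairness_gap(y_true, y_pred, groups):
    """Spread between the best- and worst-served demographic group,
    using plain accuracy as the per-group performance metric."""
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    return max(accs.values()) - min(accs.values()), accs

# Hypothetical toy labels: 1 = "finding present"; groups are patient subgroups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap, per_group = fairness_gap(y_true, y_pred, groups)
print(per_group)                   # {'A': 0.75, 'B': 0.5}
print(f"fairness gap: {gap:.2f}")  # 0.25
```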

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science, a member of MIT’s Institute for Medical Engineering and Science, and the senior author of the study.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approaches to "debiasing" worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.[…]
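The sketch below illustrates how that failure mode can arise. It is entirely synthetic and invented for this post, not the study's setup: a model is trained at a "hospital" where a spurious, group-specific marker happens to track the label, so it can lean on a demographic shortcut. At a second hospital where the marker no longer tracks the label, the per-group gap typically widens. It reuses the hypothetical `fairness_gap` helper from above.

```python
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_site(n, shortcut_holds):
    """Synthetic 'hospital'. Group 1's genuine signal is noisier, but at
    the training site a spurious marker tracks the label for group 1,
    giving the model a demographic shortcut to lean on."""
    groups = rng.integers(0, 2, n)
    y = rng.integers(0, 2, n)
    noise = np.where(groups == 1, 1.5, 0.6)   # group 1: weaker true signal
    signal = y + noise * rng.normal(size=n)
    marker = y if shortcut_holds else rng.integers(0, 2, n)
    shortcut = np.where(groups == 1, marker + 0.4 * rng.normal(size=n), 0.0)
    return np.column_stack([signal, shortcut]), y, groups

# Train at hospital A, where the shortcut holds.
X_a, y_a, _ = make_site(20_000, shortcut_holds=True)
model = LogisticRegression().fit(X_a, y_a)

# Held-out data from hospital A vs. a hospital where the shortcut breaks:
# the gap between groups typically widens at the second site.
for name, holds in [("same hospital", True), ("other hospital", False)]:
    X_t, y_t, g_t = make_site(20_000, shortcut_holds=holds)
    gap, per_group = fairness_gap(y_t, model.predict(X_t), g_t)
    print(name, {int(g): round(a, 2) for g, a in per_group.items()},
          f"gap={gap:.2f}")
```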

Read more: www.news.mit.edu

The post Study Reveals Why AI Models That Analyze Medical Images Can Be Biased appeared first on SwissCognitive | AI Ventures, Advisory & Research.