This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Learning unbiased risk prediction based algorithms in healthcare: A case study with primary care patients
0
Citations
4
Authors
2025
Year
Abstract
The proliferation of Artificial Intelligence (AI) has revolutionized the healthcare domain with technological advancements in conventional diagnosis and treatment methods. These advancements lead to faster disease detection and management and provide personalized healthcare solutions. However, most clinical AI methods developed and deployed in hospitals exhibit algorithmic and data-driven biases due to insufficient representation of specific race, gender, and age groups, which leads to misdiagnosis, disparities, and unfair outcomes. Thus, it is crucial to thoroughly examine these biases and develop computational methods that can mitigate them effectively. This paper critically analyzes this problem by exploring different types of data and algorithmic biases during both the pre-processing and post-processing phases to uncover additional, previously unexplored biases in a widely used real-world healthcare dataset of primary care patients. Additionally, effective strategies are proposed to address gender, race, and age biases, ensuring that risk prediction outcomes are equitable and impartial. Through experiments with various machine learning algorithms leveraging the Fairlearn tool, we have identified biases in the dataset, compared the impact of these biases on prediction performance, and proposed effective strategies to mitigate them. Our results demonstrate clear evidence of racial, gender-based, and age-related biases in the healthcare dataset used to guide resource allocation for patients; these biases have a profound impact on prediction performance and lead to unfair outcomes. Thus, it is crucial to implement mechanisms to detect and address unintended biases to ensure a safe, reliable, and trustworthy AI system in healthcare.
• Revisit a real-world healthcare dataset to examine pre- and post-processing biases.
• Propose effective strategies to mitigate gender, racial, and age biases.
• Leverage Fairlearn to identify and mitigate biases.
• Provide practical insights for AI in healthcare, aiding computer scientists and clinicians.
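The abstract describes using Fairlearn to compare prediction outcomes across demographic groups. As a minimal sketch of the kind of group-wise comparison involved, the snippet below computes the demographic parity difference (the spread in selection rates between the best- and worst-served groups) from scratch with NumPy; the data is a hypothetical toy example, and in practice Fairlearn exposes this metric directly as `fairlearn.metrics.demographic_parity_difference`.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Spread in selection rates across sensitive groups.

    A value of 0 means all groups receive positive predictions at the
    same rate; larger values indicate a stronger disparity.
    """
    groups = np.unique(sensitive)
    rates = np.array([y_pred[sensitive == g].mean() for g in groups])
    return rates.max() - rates.min()

# Hypothetical toy predictions for two gender groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
sex    = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

# F selection rate = 0.75, M selection rate = 0.25 → difference 0.5
print(demographic_parity_difference(y_pred, sex))
```

A nonzero value alone does not prove unfairness, but it flags groups whose outcomes warrant the kind of closer pre- and post-processing analysis the paper performs.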
Related Works
Biostatistical Analysis
1996 · 35,446 citations
UCI Machine Learning Repository
2007 · 24,290 citations
An introduction to ROC analysis
2005 · 20,732 citations
The use of the area under the ROC curve in the evaluation of machine learning algorithms
1997 · 7,132 citations
A method of comparing the areas under receiver operating characteristic curves derived from the same cases.
1983 · 7,070 citations