OpenAlex · Updated hourly · Last updated: 16.03.2026, 18:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Algorithmic Fairness and Bias Mitigation for Clinical Machine Learning: A New Utility for Deep Reinforcement Learning

2022 · 8 citations · Open Access
Open full text at the publisher

8 citations

3 authors

Year: 2022

Abstract

As machine learning-based models continue to be developed for healthcare applications, greater effort is needed to ensure that these technologies do not reflect or exacerbate any unwanted or discriminatory biases that may be present in the data. In this study, we introduce a reinforcement learning framework capable of mitigating biases that may have been acquired during data collection. In particular, we evaluated our model on the task of rapidly predicting COVID-19 for patients presenting to hospital emergency departments, and aimed to mitigate any site-specific (hospital) and ethnicity-based biases present in the data. Using a specialized reward function and training procedure, we show that our method achieves clinically effective screening performance while significantly improving outcome fairness compared to current benchmarks and state-of-the-art machine learning methods. We performed external validation across three independent hospitals, and additionally tested our method on a patient ICU discharge status task, demonstrating model generalizability.
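The paper's actual reward function is not reproduced on this page. Purely as an illustrative assumption, a "specialized reward function" in such a framework might combine a per-prediction classification reward with a penalty on disparity across sensitive groups (e.g. hospital site or ethnicity); the function name, signature, and penalty form below are hypothetical:

```python
def fairness_aware_reward(correct, pos_rates, lam=0.5):
    """Hypothetical sketch, not the paper's method: reward correct
    predictions while penalizing disparity in the running
    positive-prediction rate across sensitive groups.

    correct   -- 1.0 if the agent's prediction was correct, else 0.0
    pos_rates -- list of running positive-prediction rates, one per group
    lam       -- weight of the fairness penalty
    """
    # Disparity term: gap between the most- and least-flagged groups;
    # a perfectly group-balanced policy incurs zero penalty.
    disparity = max(pos_rates) - min(pos_rates)
    return correct - lam * disparity
```

For example, a correct prediction with balanced group rates yields the full reward of 1.0, while the same prediction under rates of 0.8 and 0.2 is discounted to 0.7, nudging the agent toward policies whose screening behavior is consistent across groups.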

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare