OpenAlex · Updated hourly · Last updated: 03.04.2026, 04:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Debias-CLR: A Contrastive Learning Based Debiasing Method for Algorithmic Fairness in Healthcare Applications

2024 · 2 citations
Open full text at the publisher

Citations: 2 · Authors: 4 · Year: 2024

Abstract

Artificial intelligence based predictive models trained on the clinical notes of patients can be demographically biased, often influenced by the demographic distribution of the training data. This can lead to adverse healthcare disparities when predicting outcomes such as patients' length of stay. To avoid such possibilities, it is necessary to mitigate the demographic biases within these models so that they predict outcomes for individual patients in a fair manner. We proposed an implicit in-processing debiasing method to combat disparate treatment, which occurs when a machine learning model predicts different outcomes for individuals based on sensitive attributes such as gender, ethnicity, and race. For this purpose, we used the clinical notes of heart failure patients, including their diagnostic codes, procedure reports, and physiological vitals. We used Clinical Bidirectional Encoder Representations from Transformers (Clinical BERT) to obtain feature embeddings from the diagnostic codes and procedure reports, and Long Short-Term Memory (LSTM) autoencoders to obtain feature embeddings from the physiological vitals. We then trained two separate deep contrastive learning frameworks, one for gender and the other for ethnicity, to obtain debiased representations with respect to those demographic traits. We call this debiasing framework Debias-CLR. We leveraged clinical phenotypes of the patients identified in the diagnostic codes and procedure reports in a previous study to measure fairness statistically. We found that Debias-CLR reduced the Single-Category Word Embedding Association Test (SC-WEAT) effect size when debiasing for gender from 0.8 to 0.3 and from 0.4 to 0.2, using clinical phenotypes in the diagnostic codes and procedure reports, respectively, as targets. Similarly, after debiasing for ethnicity, the SC-WEAT effect size was reduced from 1 to 0.5 and from -1 to 0.3 (a reversal of the bias direction), using clinical phenotypes in the diagnostic codes and procedure reports, respectively, as targets. We further found that obtaining fair representations in the embedding space with Debias-CLR did not reduce the accuracy of predictive models on downstream tasks, such as predicting patients' length of stay, compared with training the predictive models on the un-debiased counterparts. Hence, we conclude that our proposed approach, Debias-CLR, is fair and representative in mitigating demographic biases and can reduce health disparities by making fair predictions for underrepresented populations.
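The abstract reports bias with the SC-WEAT effect size, a single-category variant of the Word Embedding Association Test. A minimal sketch of that metric, assuming the standard WEAT formulation (mean cosine-similarity difference between two attribute sets, scaled by the pooled standard deviation); the function and variable names here are illustrative, not taken from the paper's code:

```python
# Hedged sketch of the Single-Category WEAT (SC-WEAT) effect size.
# `target` would be the embedding of a clinical phenotype term; `attr_a`
# and `attr_b` would be embeddings of two attribute groups (e.g. terms
# associated with each gender). Names are hypothetical.
import numpy as np


def cosine(u, v):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def sc_weat_effect_size(target, attr_a, attr_b):
    """Effect size for one target embedding against two attribute sets.

    Positive values indicate the target is closer (in cosine similarity)
    to attr_a than to attr_b; values near 0 indicate little association.
    """
    sims_a = [cosine(target, a) for a in attr_a]
    sims_b = [cosine(target, b) for b in attr_b]
    # Pooled sample standard deviation over all attribute similarities.
    pooled_std = np.std(sims_a + sims_b, ddof=1)
    return (np.mean(sims_a) - np.mean(sims_b)) / pooled_std
```

Under this formulation, a drop from 0.8 to 0.3 (as reported for gender) would mean the phenotype embeddings became substantially less associated with one attribute group after debiasing.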

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Privacy-Preserving Technologies in Data