
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluation and improvement of algorithmic fairness for COVID-19 severity classification using Explainable Artificial Intelligence-based bias mitigation

2025 · 0 citations · 6 authors · JAMIA Open · Open Access

Abstract

Objectives: The COVID-19 pandemic has highlighted the growing reliance on machine learning (ML) models for predicting disease severity, which informs clinical decision-making and equitable resource allocation. While high predictive accuracy is important, ensuring fairness in the predictions of these models is equally important to prevent bias-driven disparities in healthcare. This study evaluates fairness in an ML-based COVID-19 severity classification model and proposes an Explainable AI (XAI)-based bias mitigation strategy to address sex-related bias.

Materials and Methods: Using data from the Quebec Biobank, we developed an XGBoost-based multi-class classification model. Fairness was assessed using the Subset Accuracy Parity Difference (SAPD) and Label-wise Equal Opportunity Difference (LEOD) metrics. Four bias mitigation strategies were implemented and evaluated: Fair Representation Learning, Fair Classifier Using Constraints, Adversarial Debiasing, and our proposed XAI-based method, which uses SHapley Additive exPlanations (SHAP) for feature importance analysis.

Results: The study cohort included 1642 COVID-19-positive older adults (mean age: 77.5 years), balanced equally between males and females. The baseline (unmitigated) classification model achieved 90.68% accuracy but exhibited a 10.11% Subset Accuracy Parity Difference between sexes, indicating substantial bias. The proposed XAI-based method achieved a better trade-off between model performance and fairness than existing bias mitigation methods by identifying sex-sensitive feature interactions and integrating them into model re-training.

Discussion: Traditional fairness interventions often compromise accuracy to a greater extent than our approach. The XAI-based method achieves the best balance between classification performance and fairness, enhancing its clinical applicability.

Conclusion: The XAI-driven bias mitigation intervention effectively reduces sex-based disparities in COVID-19 severity prediction without the significant accuracy loss observed with traditional methods. This approach provides a framework for developing fair and accurate clinical decision support systems for older adults, ensuring equitable care in clinical risk stratification and resource allocation.
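The two fairness metrics named above, SAPD and LEOD, are not defined on this page. The sketch below shows one plausible reading of them for a multi-class classifier with a binary group attribute: SAPD as the absolute exact-match accuracy gap between the two sex groups, and LEOD as the per-label true-positive-rate gap. The function names and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch (not the paper's code): plausible NumPy implementations of
# Subset Accuracy Parity Difference (SAPD) and Label-wise Equal Opportunity
# Difference (LEOD) for a multi-class classifier and a binary group attribute.
import numpy as np

def subset_accuracy_parity_difference(y_true, y_pred, group):
    """Assumed SAPD: |accuracy in group 0 - accuracy in group 1|."""
    accs = [np.mean(y_true[group == g] == y_pred[group == g]) for g in (0, 1)]
    return abs(accs[0] - accs[1])

def labelwise_equal_opportunity_difference(y_true, y_pred, group, n_classes):
    """Assumed LEOD: per-class gap in true-positive rate between the groups."""
    gaps = []
    for c in range(n_classes):
        tprs = []
        for g in (0, 1):
            pos = (group == g) & (y_true == c)  # class-c positives in group g
            tprs.append(np.mean(y_pred[pos] == c) if pos.any() else np.nan)
        gaps.append(abs(tprs[0] - tprs[1]))
    return np.array(gaps)

# Synthetic demo: three severity levels, a noisy predictor, random sex labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=500)
y_pred = np.where(rng.random(500) < 0.9, y_true, rng.integers(0, 3, size=500))
sex = rng.integers(0, 2, size=500)  # 0/1 group coding is an assumption
print(subset_accuracy_parity_difference(y_true, y_pred, sex))
print(labelwise_equal_opportunity_difference(y_true, y_pred, sex, n_classes=3))
```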
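The mitigation itself, identifying sex-sensitive feature interactions with SHAP and folding them into re-training, is the paper's contribution and is not reproduced here. The sketch below only illustrates the interaction-ranking idea on synthetic data with a simplified binary outcome; the feature names, the outcome definition, and the omitted re-training step are all assumptions.

```python
# Hedged sketch: rank features by the strength of their SHAP interaction with
# the sex attribute, as one plausible reading of the abstract's XAI step.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "sex": rng.integers(0, 2, 800),   # hypothetical features for illustration
    "age": rng.normal(77.5, 6.0, 800),
    "crp": rng.gamma(2.0, 3.0, 800),
})
# Synthetic binary outcome with a sex-by-crp interaction baked in; the paper's
# real target is a multi-class severity label.
y = ((X["crp"] * (1 + 0.5 * X["sex"]) + 0.05 * X["age"]) > 12).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
# For a binary tree model, TreeExplainer returns per-sample interaction
# matrices of shape (n_samples, n_features, n_features).
inter = shap.TreeExplainer(model).shap_interaction_values(X)
sex_idx = X.columns.get_loc("sex")
strength = np.abs(inter[:, sex_idx, :]).mean(axis=0)  # mean |interaction with sex|
ranking = pd.Series(strength, index=X.columns).drop("sex").sort_values(ascending=False)
print(ranking)  # candidate sex-sensitive features to carry into re-training
```

How the ranked interactions are then integrated into re-training is specific to the paper and cannot be inferred from the abstract alone.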
