OpenAlex · Updated hourly · Last updated: 14.03.2026, 06:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Post-processing methods for mitigating algorithmic bias in healthcare classification models: An extended umbrella review

2025 · 3 citations · BMC Digital Health · Open Access
Open full text at the publisher

Citations: 3 · Authors: 4 · Year: 2025

Abstract

AI and predictive analytics have accelerated innovation in medicine. If left unchecked, however, algorithmic bias can exacerbate health disparities across race, class, or gender. Early bias mitigation literature focused on the preparation and development phases of the algorithm life cycle (pre- and in-processing). Post-processing methods, applied at the point of implementation, are less computationally intensive and do not require rebuilding or retraining the model, allowing lower-resourced health systems to mitigate bias in off-the-shelf binary classification models, which are increasingly common within electronic medical records. This umbrella review sought to identify post-processing bias mitigation methods and tools applicable to binary classification models in healthcare and to summarize bias reduction effectiveness and accuracy loss.

This review was registered with PROSPERO and reported according to PRISMA 2020. PubMed and Scopus were searched in December 2023 for English-language reviews published after 2013, using an expanded search string from previous work on machine learning bias. Eligibility criteria followed the PICOT framework. Reviews were screened independently by two authors. Data were extracted from reviews using the Joanna Briggs Institute Extraction Form for Review of Reviews, as well as from cited studies (hence, an "extended" umbrella review). Quality was assessed using the Critical Appraisal Checklist for Systematic Reviews. Evidence was synthesized by mitigation method and effectiveness.

Searches yielded 184 records. After duplicate removal and title/abstract and full-text screening, 11 reviews were included, citing 16 eligible studies. Post-processing methods tested included threshold adjustment (9 studies, cited by 8 reviews), reject option classification (6 studies, cited by 4 reviews), and calibration (5 studies, cited by 4 reviews). Threshold adjustment reduced bias in 8 of 9 trials; reject option classification and calibration reduced bias in approximately half of trials (5/8 and 4/8, respectively). Results were reported with heterogeneous fairness and accuracy metrics, making comparison difficult. A lack of effectiveness evaluation was noted across reviews. Four reviews identified 16 software libraries for addressing bias. The quality of most reviews was weak due to inadequate reporting of methods.

Threshold adjustment showed significant promise for post-processing bias mitigation in healthcare algorithms, followed by reject option classification and calibration. Future research should empirically compare post-processing methods on binary classification models using real-world healthcare data. As commercial algorithms proliferate, health systems require proven, achievable strategies to maximize fairness.
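The review's top-performing method, threshold adjustment, operates purely on a frozen model's output scores, which is why no retraining is needed. A minimal illustrative sketch (not taken from the review; the function names, the equal-opportunity-style target TPR, and the synthetic data are assumptions for illustration) of choosing group-specific thresholds so that each group's true-positive rate matches a common target:

```python
import numpy as np

def group_thresholds(scores, labels, groups, target_tpr=0.8):
    """Pick a per-group score threshold whose true-positive rate is
    close to a common target (an equal-opportunity-style criterion;
    target_tpr is a hypothetical tuning choice)."""
    thresholds = {}
    for g in np.unique(groups):
        pos_scores = scores[(groups == g) & (labels == 1)]
        if pos_scores.size == 0:
            thresholds[g] = 0.5  # fallback when a group has no positives
            continue
        # Scores at or above the (1 - target_tpr) quantile of this
        # group's true-positive scores classify ~target_tpr of them
        # correctly.
        thresholds[g] = float(np.quantile(pos_scores, 1 - target_tpr))
    return thresholds

def predict(scores, groups, thresholds):
    """Apply the per-group thresholds to the frozen model's scores."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
```

Because only the decision thresholds change, the underlying model stays untouched; the trade-off is a per-group shift in false-positive rate, which is one reason the review reports accuracy loss alongside bias reduction.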

Topics

Artificial Intelligence in Healthcare and Education · Artificial Intelligence in Healthcare · Machine Learning in Healthcare