This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Integrating explainability and bias detection in binary medical image classification: a systematic review
Citations: 0
Authors: 3
Year: 2025
Abstract
This systematic review examines how recent medical imaging studies combine explainability and bias detection in binary classification models, with a focus on promoting fairness and transparency in clinical AI. Following PRISMA guidelines, we analysed 34 studies (peer-reviewed publications and eligible preprints) published between 2020 and 2025 across radiology, dermatology, and cross-domain applications, and we appraised risk of bias using the PROBAST tool. Radiology dominates the field, largely due to the availability of public datasets and established fairness metrics such as AUC disparities and True Positive Rate differences (bias detection/auditing). Most studies use post-hoc explainability tools such as Grad-CAM to highlight influential image regions, SHAP to assign feature contribution scores, and LIME to explain model behavior through input perturbations. Several hybrid methods have shown promise, including adversarial debiasing (training models to reduce subgroup performance gaps), concept activation (linking decisions to human-understandable concepts), and prototype learning (using representative examples to guide classification). In dermatology, researchers focus on reducing skin tone bias through Fitzpatrick type stratification and tools such as GEBI, a method for visualizing model sensitivity, and counterfactual explanations that reveal how small input changes could alter predictions. Cross-domain studies emphasize generalizability, employing multimodal inputs and causal modeling to handle dataset and context shifts. Overall, this review highlights the increasing sophistication of methods that integrate interpretability and fairness: an essential step toward ethical and robust AI deployment in healthcare.
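The fairness metrics mentioned above (AUC disparities and True Positive Rate differences across subgroups) can be illustrated with a minimal sketch of a subgroup bias audit. The data here is entirely synthetic and the variable names are hypothetical; the sketch only assumes a binary classifier's scores and a binary subgroup attribute.

```python
# Minimal sketch of a subgroup fairness audit for a binary classifier.
# All data is synthetic/hypothetical; the metrics follow the abstract:
# AUC disparity and True Positive Rate (TPR) difference between groups.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic ground truth, a binary subgroup label, and model scores
# that are deliberately noisier (i.e. worse) for group 1.
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
noise = np.where(group == 0, 0.3, 0.5)
y_score = np.clip(y_true + rng.normal(0.0, noise), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)

def tpr(y, p):
    """True Positive Rate: TP / (TP + FN)."""
    pos = y == 1
    return float((p[pos] == 1).mean())

# Per-subgroup metrics.
metrics = {}
for g in (0, 1):
    m = group == g
    metrics[g] = {
        "auc": roc_auc_score(y_true[m], y_score[m]),
        "tpr": tpr(y_true[m], y_pred[m]),
    }

# Disparities used as bias-detection signals.
auc_disparity = abs(metrics[0]["auc"] - metrics[1]["auc"])
tpr_difference = abs(metrics[0]["tpr"] - metrics[1]["tpr"])
print(f"AUC disparity: {auc_disparity:.3f}, TPR difference: {tpr_difference:.3f}")
```

In an actual audit, `group` would be a protected attribute such as sex or Fitzpatrick skin type, and nonzero disparities would prompt further investigation or mitigation (e.g. the adversarial debiasing mentioned above).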
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 cit.
Generative Adversarial Nets
2014 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,236 cit.
"Why Should I Trust You?"
2016 · 14,204 cit.
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 cit.