This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable artificial intelligence (XAI) in medical imaging: a systematic review of techniques, applications, and challenges
Citations: 1
Authors: 6
Year: 2026
Abstract
Explainable Artificial Intelligence (XAI) is crucial for enhancing the transparency and trustworthiness of AI-based diagnostic systems in medical imaging, and for achieving clinical acceptability and reliability. This synthesis surveys the current state of XAI in medical imaging along three key dimensions: trends in techniques, their application to clinical use cases and human-subject studies, and the associated challenges. We report on image analysis in radiology and pathology based on current literature drawn from credible databases. This review extends existing surveys by explicitly addressing feature selection (FS), graph neural networks (GNNs), and multimodal transformers, combining them into a cohesive XAI taxonomy and matching the techniques to specific impact points in the clinical workflows of radiology and pathology. We present saliency maps, attention mechanisms, and gradient-based and rule-based explanations of deep learning (DL) models in today's healthcare environment. The results show that XAI significantly enhances clinical decision-making by making the model's high-level reasoning readily available to users, thereby increasing their confidence in clinical decisions, despite remaining obstacles including standardization, interpretability, data bias, and complicated data integration. Of 980 records identified, 289 duplicates were removed and 691 records were screened; 209 were excluded and 482 full texts were assessed for eligibility, of which 263 were excluded, leaving 219, from which 133 studies were included. Exploring this third dimension within an XAI environment identifies research gaps and paves the way for robust medical imaging XAI solutions. Clinical trial number: Not applicable.
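Since the abstract highlights gradient-based saliency explanations, and Grad-CAM heads the related-works list below, a minimal sketch may help make that technique concrete. The following is an illustrative Grad-CAM implementation under stated assumptions: the tiny untrained CNN, the hooked layer, and the random single-channel input are stand-ins for a real diagnostic model and scan, and none of it is taken from the reviewed studies.

```python
# Minimal Grad-CAM sketch (Selvaraju et al., 2017), the gradient-based
# saliency technique named in the abstract and related works. The toy
# model and random input are illustrative assumptions only.
import torch
import torch.nn.functional as F

class TinyCNN(torch.nn.Module):
    """Toy grayscale classifier standing in for a diagnostic model."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
        )
        self.head = torch.nn.Sequential(
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TinyCNN().eval()
store = {}

def keep_feature_maps(module, inputs, output):
    # Keep the last conv block's feature maps and ask autograd to
    # retain their gradient so Grad-CAM can read it after backward().
    output.retain_grad()
    store["acts"] = output

model.features.register_forward_hook(keep_feature_maps)

def grad_cam(image, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for the given class."""
    logits = model(image)                      # forward pass stores activations
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()            # populates store["acts"].grad
    grads = store["acts"].grad                 # (1, K, h, w) gradients
    acts = store["acts"].detach()              # (1, K, h, w) feature maps
    weights = grads.mean(dim=(2, 3), keepdim=True)  # pooled channel weights
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

heatmap = grad_cam(torch.randn(1, 1, 224, 224))  # dummy grayscale "scan"
print(heatmap.shape)                             # torch.Size([224, 224])
```

In practice the returned heatmap is overlaid on the input image so a clinician can see which regions drove the prediction, which is the kind of workflow-level transparency the review discusses.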
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations