This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI for Medical Imaging: A Taxonomy Based on Clinical Task Requirements
Citations: 1
Authors: 4
Year: 2025
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a critical enabler for deploying AI-driven medical imaging systems where transparency, trust, and accountability are paramount. However, most current taxonomies of XAI methods categorize techniques based on algorithmic families (e.g., saliency maps, attribution methods), which often fail to reflect the practical requirements of clinical tasks. This paper proposes a novel task-centric taxonomy of XAI in medical imaging that aligns explanation techniques with four key clinical tasks: classification, detection, segmentation, and prognostic assessment. For each task, we analyze how different XAI methods enhance model interpretability, their suitability for clinical decision-making, and their limitations in real-world applications. Our taxonomy aims to provide a practical framework for researchers and practitioners to select appropriate XAI strategies tailored to the specific demands of medical imaging workflows. Furthermore, we highlight the current gaps in task-specific explainability and propose future research directions towards clinically meaningful, task-driven XAI solutions. This work serves as a step towards bridging the gap between technical XAI developments and the functional needs of clinical practice.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,326 cit.
Generative Adversarial Nets
2014 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,241 cit.
"Why Should I Trust You?"
2016 · 14,218 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,111 cit.