This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI for Medical Image Analysis: Enhancing Diagnostic Transparency and Clinical Trust
Citations: 0
Authors: 6
Year: 2025
Abstract
Explainable Artificial Intelligence (XAI) is reshaping medical image analysis by building trust, creating transparency, and promoting the clinical adoption of AI systems. This systematic review synthesizes 25 recent studies to give a clear picture of the current state of XAI, its open challenges, and possible future directions. The results show a growing reliance on deep learning models augmented with interpretability techniques such as saliency maps, self-explainable architectures, and ensemble methods. The reviewed papers emphasize the need to address clinical relevance, usability, and compliance with regulatory standards, particularly in critical areas such as breast cancer diagnosis, brain tumor classification, and the analysis of Alzheimer's disease. Despite this progress, substantial challenges remain, including the lack of standardized evaluation platforms, the difficulty of connecting technical explanations to clinical reasoning, and the need to uphold fairness and confidentiality in federated learning settings. In addition, emerging approaches such as privacy-preserving compression and multi-modal explanations are yielding more effective and reliable solutions. This review highlights interdisciplinary collaboration, human-centered design, and the standardization of benchmarks as necessary conditions for responsible and effective XAI in medical image analysis.
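Most of the interpretability techniques the abstract mentions are post-hoc attribution methods. As a rough illustration of the saliency-map family (in the spirit of Grad-CAM, listed under Related Works below), the following sketch computes a gradient-weighted class activation map with PyTorch; the model, target layer, and random input are illustrative assumptions, not taken from this paper.

```python
# Minimal Grad-CAM-style saliency sketch (illustrative only; the model,
# hooked layer, and input are placeholder assumptions, not from this paper).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # untrained stand-in for a diagnostic CNN
model.eval()

store = {}
# Capture the last conv block's activations and their gradients via hooks.
model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = torch.randn(1, 3, 224, 224)          # placeholder "image"
logits = model(x)
class_idx = logits.argmax(dim=1).item()   # explain the top-scoring class
logits[0, class_idx].backward()

# Weight each feature map by its spatially averaged gradient, sum, ReLU.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))

# Upsample to input resolution and normalize to [0, 1] for overlaying.
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 224, 224]): heatmap over the input
```

In practice the heatmap would be overlaid on the scan to show which regions drove the prediction; clinical validation of such maps is exactly the kind of evaluation gap the review discusses.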
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,366 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,255 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,122 citations