This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Post-hoc eXplainable AI methods for analyzing medical images of gliomas (— A review for clinical applications)
Citations: 1
Authors: 7
Year: 2025
Abstract
Deep learning (DL) has shown promise in glioma imaging tasks using magnetic resonance imaging (MRI) and histopathology images, yet the complexity of these models demands greater transparency in artificial intelligence (AI) systems. This is especially relevant when users must understand a model's output for a clinical application. In this systematic review, 65 studies on post-hoc eXplainable AI (XAI), or interpretable AI, are reviewed that provide an understanding of why a system generated a given output for tasks related to glioma imaging. A framework of post-hoc XAI methods, such as Gradient-based XAI (G-XAI) and Perturbation-based XAI (P-XAI), is introduced to evaluate deep models and explain their application to gliomas. The papers on XAI techniques in gliomas are surveyed and categorized by their specific aims, such as grading, genetic biomarker detection, localization, intra-tumoral heterogeneity assessment, and survival analysis, and by their XAI approach. This review highlights the growing integration of XAI in glioma imaging and demonstrates its role in bridging AI decision-making and medical diagnostics. The co-occurrence analysis emphasizes the role of XAI methods in enhancing model transparency and trust and in guiding future research toward more reliable clinical applications. Finally, the current challenges associated with DL and XAI approaches and their clinical integration are discussed, with an outlook on future opportunities from clinical users' perspectives and upcoming trends in XAI.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,150 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations