This is an overview page with metadata for this scientific work. The full article is available from the publisher.
XIME3D: A Systematic Framework for Evaluating Explainable AI in 3D Medical Imaging under CT Image Pre-Processing Variations
Citations: 0
Authors: 3
Year: 2026
Abstract
Recent advances in deep learning have enabled expert-level performance in disease classification from medical images, but the black-box decision-making of these models limits trust and hinders widespread clinical deployment. While Explainable Artificial Intelligence (XAI) methods aim to bridge this gap, existing studies focus on 2D data or pre-processed research datasets, overlooking the medical image pre-processing operations that are an essential component of real-world 3D medical imaging workflows. To address this limitation, we propose XIME3D, a systematic, predictive model–centered framework for evaluating explainability under realistic pre-processing conditions for volumetric medical data. The framework integrates five volumetric pre-processing variants and ten post-hoc attribution methods, evaluated through three complementary criteria: Correctness, Contrastivity, and Completeness, which together assess how explanations depend on model input, internal structure, and output behavior. Across more than 300 experimental configurations, XIME3D reveals that gradient-based methods, such as Integrated Gradients and Blur Integrated Gradients, provide the most consistent and model-aligned explanations, while noise-based approaches like SmoothGrad and VarGrad are less sensitive to model behavior. These findings underscore the importance of clinically realistic evaluation pipelines for reliable explainability in 3D medical imaging.
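The abstract highlights Integrated Gradients among the evaluated attribution methods. As context, here is a minimal sketch of the Integrated Gradients attribution rule on a toy differentiable function with an analytic gradient; the toy model, weights, and step count are illustrative assumptions, not part of the paper's setup, and real use would compute gradients of a trained network via autodiff.

```python
import numpy as np

W = np.array([1.0, 2.0, 3.0])  # toy "model" weights (assumed for illustration)

def model(x):
    # Toy differentiable model: weighted sum of squares.
    return float(np.dot(W, x ** 2))

def grad(x):
    # Analytic gradient of the toy model.
    return 2.0 * W * x

def integrated_gradients(x, baseline, steps=100):
    # Midpoint Riemann-sum approximation of the path integral
    # of gradients along the straight line from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to model(x) - model(baseline).
```

The closing comment refers to the completeness axiom of Integrated Gradients, which matches the "Completeness" criterion the framework names: summed attributions should equal the difference in model output between the input and the baseline.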
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations