This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards Trustworthy Alzheimer’s Diagnosis: Systematic Evaluation of Explainable Artificial Intelligence Methods on Neuroimaging Data
Citations: 0 · Authors: 3 · Year: 2025
Abstract
Machine learning has attained state-of-the-art performance in the neuroimaging field, including disease classification, brain stage prediction, and assessment of the brain's cognitive state. However, the black-box nature of these models raises concerns about transparency and trust, especially for clinical acceptance. Explainable Artificial Intelligence (XAI) aims to address these challenges by providing insights into how these models arrive at their decisions. This paper presents a comparative evaluation of six prominent XAI methods applied to neuroimaging data: Partial Dependence Plot (PDP), Feature Importance, Surrogate Models, Individual Conditional Expectation (ICE), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). Using the Open Access Series of Imaging Studies (OASIS), we evaluate the strengths and limitations of each XAI method. In this manuscript, we conduct an in-depth analysis of the six XAI methods and explore their specific use cases. We conclude by proposing a comparative study for selecting XAI methods specifically for neuroimaging applications.
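To make the comparison concrete, the sketch below illustrates one of the six methods named in the abstract, permutation feature importance, implemented from scratch. The data, feature layout, and threshold "model" are hypothetical stand-ins, not the OASIS dataset or the paper's actual models; the point is only to show how the importance score (the accuracy drop after shuffling a feature) is computed.

```python
import numpy as np

# Hypothetical illustration of permutation feature importance, one of the
# six XAI methods compared in the paper. The synthetic data is a stand-in
# for OASIS-style tabular features: feature 0 drives the label, feature 1
# is pure noise.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in "trained model": simply thresholds feature 0.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, rng=None):
    """Mean accuracy drop when each feature column is shuffled."""
    rng = rng if rng is not None else np.random.default_rng()
    base = (predict(X) == y).mean()  # baseline accuracy
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-label link
            drops.append(base - (predict(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return np.array(importances)

imp = permutation_importance(model_predict, X, y, rng=rng)
print(imp)  # large drop for feature 0, zero drop for the unused feature 1
```

Shuffling the predictive feature collapses accuracy toward chance, while shuffling the noise feature leaves it unchanged, which is exactly the signal a feature-importance explanation reports.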
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations