OpenAlex · Updated hourly · Last updated: March 17, 2026, 11:19

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Towards Trustworthy Alzheimer’s Diagnosis: Systematic Evaluation of Explainable Artificial Intelligence Methods on Neuroimaging Data

2025 · 0 citations
Open full text at the publisher

0 citations · 3 authors · 2025

Abstract

Machine learning has attained state-of-the-art performance in the neuroimaging field, including disease classification, brain stage prediction, and estimation of the brain's cognitive state. However, the black-box nature of these models raises concerns about trust, especially where clinical acceptance is at stake. Explainable Artificial Intelligence (XAI) aims to address these challenges by providing insights into how these models arrive at their decisions. This paper presents a comparative evaluation of six prominent XAI methods applied to neuroimaging data: Partial Dependence Plots (PDP), Feature Importance, Surrogate Models, Individual Conditional Expectation (ICE), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). Using the Open Access Series of Imaging Studies (OASIS), we evaluate the strengths and limitations of each XAI method. In this manuscript, we conduct an in-depth analysis of the six XAI methods and explore their specific use cases. We conclude by proposing a comparative guide for selecting XAI methods for neuroimaging applications.
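To give a concrete sense of one of the six methods the abstract lists, the sketch below implements model-agnostic permutation feature importance in pure Python. This is a generic illustration, not the paper's implementation: the toy model, data, and function names are invented for the example, and the paper's actual experiments use the OASIS neuroimaging data with trained classifiers.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic permutation feature importance.

    For each feature, shuffle its column across samples and measure how
    much the model's accuracy drops relative to the unshuffled baseline;
    a larger average drop indicates a more important feature.
    """
    rng = random.Random(seed)
    n_features = len(X[0])

    def accuracy(rows):
        preds = [predict(r) for r in rows]
        return sum(p == t for p, t in zip(preds, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            # Rebuild the dataset with feature j permuted, others intact.
            permuted = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(permuted))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model": predicts class 1 when feature 0 exceeds a threshold;
# feature 1 is ignored, so its importance should come out as exactly 0.
X = [[x, random.random()] for x in range(20)]
y = [1 if row[0] > 9 else 0 for row in X]

def model(row):
    return 1 if row[0] > 9 else 0

imps = permutation_importance(model, X, y)
```

Because the toy model never reads feature 1, shuffling that column cannot change any prediction, so its importance is zero, while shuffling feature 0 breaks the decision rule and produces a clearly positive drop in accuracy.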

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare