This is an overview page with metadata for this research paper. The full article is available from the publisher.
Development and Evaluation of an Explainable Diagnostic AI for Alzheimer's Disease
10
Citations
1
Author
2023
Year
Abstract
Alzheimer's Disease (AD) is a progressive neurodegenerative disease that is estimated to affect 24.3 million people worldwide, and with an expected rise in cases, an accurate and efficient diagnosis method is necessary. Machine learning has been implemented for diagnosing AD in previous studies utilizing various modalities of data. More specifically, deep learning has been shown to perform at very high accuracy rates. However, many healthcare applications of deep learning paradigms lack trust due to their lack of interpretability. To provide evidence for the potential of deep learning to meet established standards of accurate diagnosis, we have designed an explainable diagnostic machine learning model for predicting AD severity levels. Using two open-source Magnetic Resonance Imaging (MRI) datasets, we develop and evaluate a Convolutional Neural Network (CNN). An accuracy rate of 99.9% was achieved using the CNN model, which outperforms other models trained on the same dataset. To improve model transparency, we leverage the explainable AI technique SHapley Additive exPlanations (SHAP) to interpret the predictions of the CNN. The implementation of explainable AI demonstrates that the predictions of the model are influenced by well-known pathological indicators of AD.
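The abstract's SHAP attributions rest on the Shapley-value idea: a feature's contribution is its average marginal effect over all coalitions of the other features, with absent features replaced by a baseline value. The following is a minimal sketch of that computation by exact enumeration on a hypothetical toy linear model; it stands in for intuition only and is not the paper's CNN pipeline (which would use a library such as `shap` with image inputs).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x.

    Features outside a coalition S are set to their baseline value;
    phi_i averages f's marginal gain from adding feature i over all S.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # coalition weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Hypothetical toy model (a linear scorer), not the paper's CNN.
weights = [0.5, -1.0, 2.0]
model = lambda v: sum(w * xi for w, xi in zip(weights, v))

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
```

For a linear model with a zero baseline, each attribution reduces to `w_i * x_i`, and the attributions sum to `f(x) - f(baseline)` (the "additive" property SHAP is named for). Exact enumeration is exponential in the number of features, which is why practical SHAP implementations approximate it.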
Related Works
"Why Should I Trust You?"
2016 · 14,218 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,589 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,386 citations