This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A review of evaluation approaches for explainable AI with applications in cardiology
Citations: 48
Authors: 9
Year: 2024
Abstract
Explainable artificial intelligence (XAI) elucidates the decision-making process of complex AI models and is important in building trust in model predictions. XAI explanations themselves require evaluation as to accuracy and reasonableness and in the context of use of the underlying AI model. This review details the evaluation of XAI in cardiac AI applications and has found that, of the studies examined, 37% evaluated XAI quality using literature results, 11% used clinicians as domain-experts, 11% used proxies or statistical analysis, with the remaining 43% not assessing the XAI used at all. We aim to inspire additional studies within healthcare, urging researchers not only to apply XAI methods but to systematically assess the resulting explanations, as a step towards developing trustworthy and safe models. Supplementary Information: The online version contains supplementary material available at 10.1007/s10462-024-10852-w.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,995 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,374 citations
"Why Should I Trust You?"
2016 · 14,750 citations
Generative adversarial networks
2020 · 13,352 citations
Authors
Institutions
- University of Leicester (GB)
- Queen Mary University of London (GB)
- University of Zakho (IQ)
- William Harvey Research Institute (GB)
- University of Verona (IT)
- Universitat de Barcelona (ES)
- Institució Catalana de Recerca i Estudis Avançats (ES)
- St Bartholomew's Hospital (GB)
- Barts Health NHS Trust (GB)
- The Alan Turing Institute (GB)
- Health Data Research UK (GB)