This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Consistency of XAI Models against Medical Expertise: An Assessment Protocol
Citations: 1
Authors: 6
Year: 2024
Abstract
Despite the significant advances made by Artificial Intelligence (AI) models in enhancing medical diagnostics and prognostics, their opacity poses a hurdle to widespread clinical adoption. In this regard, Explainable AI (XAI) aims to demystify these complex models, such as neural networks, by revealing the reasoning behind predictions. However, a notable gap exists in enabling non-experts to verify these explanations, necessitating human-in-the-loop evaluation. This paper introduces a systematic protocol, including a novel “consistency” metric, to evaluate the SHAP-based explanations of XAI, comparing them against the clinical knowledge of expert clinicians. We demonstrate how this metric could facilitate both global and feature-specific analyses, operating at the level of individual instances, and thus enhancing AI transparency. It is conceived that the implications of this work may extend beyond the medical context, offering a standardized methodology that could potentially improve the interpretability and acceptance of AI systems in diverse domains.
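The abstract does not spell out how the "consistency" metric is computed. As a purely illustrative, hedged sketch (an assumption, not the paper's actual definition), one simple reading is sign-agreement between per-instance SHAP values and clinician-expected feature directions, aggregated globally, per feature, or per instance:

```python
# Hypothetical sketch of a "consistency" score: agreement between the sign of
# SHAP attributions and clinician-expected feature directions. This is an
# assumed formulation for illustration only, not the protocol from the paper.
import numpy as np

# Per-instance SHAP values, shape (n_instances, n_features) -- toy data.
shap_values = np.array([
    [ 0.31, -0.12,  0.05],
    [ 0.22,  0.08, -0.14],
    [-0.05, -0.20,  0.11],
])

# Expert-elicited expected direction per feature:
# +1 = higher feature value should push the prediction up, -1 = push it down.
expected_sign = np.array([+1, -1, +1])

# Boolean agreement matrix, shape (n_instances, n_features).
agreement = np.sign(shap_values) == expected_sign

per_feature_consistency = agreement.mean(axis=0)   # feature-specific analysis
per_instance_consistency = agreement.mean(axis=1)  # instance-level analysis
global_consistency = agreement.mean()              # global analysis

print("per-feature:", per_feature_consistency)
print("per-instance:", per_instance_consistency)
print("global:", global_consistency)
```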
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations