OpenAlex · Updated hourly · Last updated: 12.03.2026, 05:41

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Consistency of XAI Models against Medical Expertise: An Assessment Protocol

2024 · 1 citation

Citations: 1
Authors: 6
Year: 2024

Abstract

Despite the significant advances made by Artificial Intelligence (AI) models in enhancing medical diagnostics and prognostics, their opacity poses a hurdle to widespread clinical adoption. In this regard, Explainable AI (XAI) aims to demystify these complex models, such as neural networks, by revealing the reasoning behind predictions. However, a notable gap exists in enabling non-experts to verify these explanations, necessitating human-in-the-loop evaluation. This paper introduces a systematic protocol, including a novel “consistency” metric, to evaluate SHAP-based XAI explanations by comparing them against the clinical knowledge of expert clinicians. We demonstrate how this metric can facilitate both global and feature-specific analyses, operating at the level of individual instances, thus enhancing AI transparency. The implications of this work may extend beyond the medical context, offering a standardized methodology that could improve the interpretability and acceptance of AI systems in diverse domains.
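The abstract does not spell out how the “consistency” metric is computed. One plausible reading, sketched below purely as an illustration (the function name, the sign-agreement definition, and the toy data are assumptions, not the paper's actual formulation), is to score, per feature, the fraction of instances whose SHAP attribution sign agrees with the direction an expert clinician would expect that feature to push the prediction.

```python
# Hypothetical sketch of a per-feature "consistency" score between SHAP
# attributions and expert-expected feature directions. The paper's exact
# metric is not given here; this is an illustrative assumption.
import numpy as np

def consistency(shap_values, expert_signs):
    """Per-feature fraction of instances whose SHAP sign matches the
    expert-expected direction.

    shap_values  -- array of shape (n_instances, n_features)
    expert_signs -- length-n_features vector of +1 / -1 expected directions
    """
    agree = np.sign(shap_values) == np.asarray(expert_signs)
    return agree.mean(axis=0)  # one score per feature, in [0, 1]

# Toy example: 4 instances, 2 features. The expert expects feature 0 to
# increase the predicted risk (+1) and feature 1 to decrease it (-1).
sv = np.array([[ 0.2, -0.1],
               [ 0.5, -0.3],
               [-0.1, -0.2],
               [ 0.3,  0.4]])
print(consistency(sv, [+1, -1]))  # → [0.75 0.75]
```

A global score could then be the mean over features, while the per-feature vector supports the feature-specific analysis the abstract mentions.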
