OpenAlex · Updated hourly · Last updated: 12.03.2026, 20:22

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Quantifying Trustworthiness of Explainability in Medical AI

2022 · 2 citations · Open Access
Open full text at publisher

Citations: 2
Authors: 6
Year: 2022

Abstract

Saliency visualization methods help explain artificial intelligence (AI) models and build trust in AI-driven medical image analysis applications. However, the trustworthiness of the generated explanations is often overlooked. Our article demonstrates that the vulnerability of such explanations to subtle perturbations of the input can lead to misleading results. More importantly, we show that these vulnerabilities can be exploited without knowing the details of an AI model. We then propose criteria and methods to evaluate the trustworthiness of saliency maps, and report a series of systematic evaluations and reader studies of widely adopted deep neural networks on large-scale public datasets. The results show that the saliency may not be relevant to the model output, and that it can be tampered with, without knowing the model specifics, while the model output is maintained. The evidence suggests that AI researchers, practitioners, and authoritative agencies in the medical domain should use caution when explaining AI models.
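The explanations discussed in the abstract are gradient-style saliency maps, which score each input feature by how strongly the model output responds to it. The sketch below is purely illustrative and is not the paper's code: it uses a toy linear scorer in place of a deep network, and computes saliency as gradient magnitude via central finite differences.

```python
import numpy as np

# Toy stand-in "model": a fixed linear scorer. The paper studies deep
# neural networks; this toy only illustrates the saliency computation.
rng = np.random.default_rng(0)
W = rng.normal(size=16)

def model(x):
    return float(W @ x)

def saliency(x, eps=1e-4):
    """Gradient-magnitude saliency via central finite differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (model(x + e) - model(x - e)) / (2 * eps)
    return np.abs(grad)

x = rng.normal(size=16)
s = saliency(x)  # for a linear model this equals |W| for every input
```

For this linear toy, the saliency is identical for every input (it is always `|W|`), which hints at the abstract's concern: a saliency map can stay plausible-looking while saying little about the model's behavior on the specific input, and conversely it can be perturbed without changing the model output.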

Topics

- Explainable Artificial Intelligence (XAI)
- Artificial Intelligence in Healthcare and Education
- Machine Learning in Healthcare