OpenAlex · Updated hourly · Last updated: 12.03.2026, 01:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Overlooked Trustworthiness of Explainability in Medical AI

2021 · 9 citations · Open Access
Open full text at publisher

9 citations · 5 authors · Year: 2021

Abstract

While various methods have been proposed to explain AI models, the trustworthiness of the generated explanations has received little examination. This paper reveals that such explanations can be vulnerable to subtle perturbations of the input and produce misleading results. On the public CheXpert dataset, we demonstrate that specially designed adversarial perturbations can easily tamper with saliency maps, steering them toward desired explanations while preserving the original model predictions. AI researchers, practitioners, and authoritative agencies in the medical domain should use caution when explaining AI models, because such an explanation could be irrelevant, misleading, or even adversarially manipulated without changing the model output.
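The attack the abstract describes optimizes an input perturbation so that a gradient-based saliency map moves toward an attacker-chosen target while the model's output stays fixed. Below is a minimal NumPy sketch of that idea. The two-layer tanh "model", the hyperparameters, and the finite-difference optimizer are all illustrative assumptions standing in for the paper's actual CheXpert classifiers and attack procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate model: f(x) = w2 . tanh(W1 @ x).
# A stand-in for a real image classifier, chosen so that the
# saliency map (the input gradient) actually depends on x.
d, hidden = 6, 8
W1 = rng.normal(size=(hidden, d))
w2 = rng.normal(size=hidden)

def f(x):
    return float(w2 @ np.tanh(W1 @ x))

def saliency(x):
    # Gradient of f w.r.t. the input: the simplest saliency map.
    return W1.T @ (w2 * (1.0 - np.tanh(W1 @ x) ** 2))

def attack(x0, target_saliency, eps=0.2, steps=200, lr=0.05, lam=10.0):
    """Search, inside an L-inf ball of radius eps around x0, for an input
    whose saliency matches target_saliency while f's output stays near f(x0)."""
    y0 = f(x0)

    def loss(x):
        # Saliency mismatch plus a penalty that pins the prediction.
        return float(np.sum((saliency(x) - target_saliency) ** 2)
                     + lam * (f(x) - y0) ** 2)

    x = x0.copy()
    best_x, best_loss = x0.copy(), loss(x0)
    for _ in range(steps):
        # Central finite-difference gradient of the attack loss w.r.t. x.
        g = np.zeros(d)
        h = 1e-5
        for i in range(d):
            e = np.zeros(d)
            e[i] = h
            g[i] = (loss(x + e) - loss(x - e)) / (2 * h)
        # Gradient step, then projection back into the eps-ball.
        x = np.clip(x - lr * g, x0 - eps, x0 + eps)
        if loss(x) < best_loss:
            best_x, best_loss = x.copy(), loss(x)
    return best_x, best_loss

x0 = rng.normal(size=d)
target = -saliency(x0)  # try to flip the explanation's sign
x_adv, final_loss = attack(x0, target)
```

Because the perturbation is projected into a small L-infinity ball and the loss penalizes output drift, `x_adv` stays visually close to `x0` and `f(x_adv)` close to `f(x0)`, even as the saliency map is pushed toward the attacker's target; this is the decoupling of explanation and prediction that the paper warns about.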

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Adversarial Robustness in Machine Learning