This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Quantifying Trustworthiness of Explainability in Medical AI
Citations: 2
Authors: 6
Year: 2022
Abstract
Saliency visualization methods help explain artificial intelligence (AI) models and build trust in AI-driven medical image analysis applications. However, the trustworthiness of the generated explanations is often overlooked. Our article demonstrates that the vulnerability of such explanations to subtle perturbations of the input can lead to misleading results. More importantly, we show that these vulnerabilities can be exploited without knowing the details of an AI model. We then propose criteria and methods to evaluate the trustworthiness of saliency maps and report a series of systematic evaluations and reader studies of widely adopted deep neural networks on large-scale public datasets. The results show that the saliency may not be relevant to the model output, and that it can be tampered with while maintaining the model output, without knowing the model specifics. The evidence suggests that AI researchers, practitioners, and authoritative agencies in the medical domain should use caution when explaining AI models.
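The abstract's central claim, that a saliency explanation can shift under an input perturbation that leaves the model's prediction intact, can be illustrated with a minimal sketch. This is not the paper's method: it assumes a torchvision ResNet-18, a random tensor standing in for a medical image, vanilla-gradient saliency in place of the saliency methods the paper evaluates, and unoptimized random noise where an actual attacker would optimize the perturbation under an output-preservation constraint. The sketch only shows the measurement side: comparing predictions and saliency agreement before and after perturbing the input.

```python
import torch
import torch.nn.functional as F
from torchvision import models


def saliency_map(model, x):
    """Vanilla-gradient saliency: |d(max logit)/d(input)|, max over channels."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax()].backward()  # gradient of the top logit
    return x.grad.abs().max(dim=1)[0]      # shape (1, H, W)


# Assumption: an untrained torchvision ResNet-18 and a random "image";
# in practice this would be a trained medical-imaging model and a real scan.
model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

base_pred = model(x).argmax(dim=1)
base_sal = saliency_map(model, x)

# Small random perturbation; an adversary would optimize this so that the
# prediction is provably unchanged while the saliency is redirected.
x_pert = x + 0.01 * torch.randn_like(x)
pert_pred = model(x_pert).argmax(dim=1)
pert_sal = saliency_map(model, x_pert)

same_output = bool((base_pred == pert_pred).all())
# Cosine similarity between flattened saliency maps as a crude agreement score;
# low agreement despite an unchanged prediction signals an untrustworthy map.
agreement = F.cosine_similarity(base_sal.flatten(), pert_sal.flatten(), dim=0)
print(f"prediction unchanged: {same_output}, saliency agreement: {agreement.item():.3f}")
```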
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations