OpenAlex · Updated hourly · Last updated: Mar 17, 2026, 12:14

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Assessing Vulnerabilities of Deep Learning Explainability in Medical Image Analysis Under Adversarial Settings

2023 · 5 citations · 4 authors
Abstract

Deep Learning (DL) is a valuable set of techniques that improve medical decision-making based on imaging exams, such as Chest X-rays (CXR), Computed Tomography (CT), and Optical Coherence Tomography (OCT). However, DL models may be susceptible to adversarial attacks when perturbed (tampered) examples sneak into the data, decreasing the model's confidence. In this paper, we evaluate the vulnerabilities of DL applied to medical images and analyze the effects of attacks on Gradient-weighted Class Activation Mapping (Grad-CAM). Our experiments were conducted on two scenarios: (i) CXR images with binary classification; (ii) OCT images with multi-class classification. Vulnerabilities are described by the Fooling Rate (FR) and by visual analysis of Grad-CAM. We show that PGD is the most damaging attack in the multi-class scenario, reaching an FR of up to 96%, whereas DeepFool is the most harmful in the binary scenario, reaching an FR of up to 93%. Our analysis can be used to understand adversarial attacks on medical images and their effects on explainability. The developed code is available on GitHub at https://github.com/eriksonJAguiar/Grad-Attacks-CBMS-2023.
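As context for the pipeline the abstract describes, the sketch below shows a PGD attack, a fooling-rate computation, and a bare-bones Grad-CAM in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation (which lives in the linked repository): the classifier, the hyperparameters (eps, alpha, steps), and the FR definition used here (fraction of predictions flipped by the attack) are illustrative choices.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """PGD: repeatedly step along the sign of the loss gradient and
    project the perturbation back into the L-infinity eps-ball.
    eps/alpha/steps are illustrative defaults, not the paper's settings."""
    images = images.clone().detach()
    # Random start inside the eps-ball, clipped to the valid pixel range [0, 1].
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)  # project
    return adv.detach()

def fooling_rate(model, images, adv_images):
    """One common FR definition: the fraction of samples whose predicted
    class changes after the attack."""
    with torch.no_grad():
        clean = model(images).argmax(dim=1)
        attacked = model(adv_images).argmax(dim=1)
    return (clean != attacked).float().mean().item()

def grad_cam(model, target_layer, image, class_idx=None):
    """Grad-CAM: weight the target layer's activations by the spatially
    averaged gradients of the class score, then apply ReLU."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(image.unsqueeze(0)))if False else model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * acts[0]).sum(dim=1)).squeeze(0)
    return (cam / (cam.max() + 1e-8)).detach()  # normalized to [0, 1]

Comparing the Grad-CAM heatmap of a clean image with that of its PGD counterpart visualizes how an attack shifts the saliency region, which is the kind of explainability effect the paper analyzes.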

Topics

Adversarial Robustness in Machine Learning
Artificial Intelligence in Healthcare and Education
Anomaly Detection Techniques and Applications
Open full text at the publisher