OpenAlex · Updated hourly · Last updated: May 7, 2026, 22:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Addressing Vulnerability in Medical Deep Learning through Robust Training

2023 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2023

Abstract

Deep neural networks have been incorporated into healthcare to diagnose and detect medical conditions. However, studies have shown that the vulnerability of neural networks to adversarial attacks and noise remains a pervasive problem, one that compromises medical practitioners' trust and the accuracy of diagnosis, prognosis, and outcome prediction by such systems. In this study we show that robust training methods help models withstand not only adversarial attacks but also noise and calibration errors.
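The abstract's core idea, training on worst-case perturbed inputs so the model stays accurate under attack and noise, can be illustrated with a minimal sketch. This is not the paper's actual method (which is not specified on this page); it is a toy FGSM-style adversarial training loop on a NumPy logistic-regression classifier, with all data, hyperparameters, and names being assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of robust (adversarial) training, FGSM-style,
# on a toy logistic-regression "classifier". Everything here is an
# illustrative assumption, not the paper's actual setup.
rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs standing in for patient features.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def input_grad(X, y, w, b):
    # Gradient of the logistic loss w.r.t. the INPUTS,
    # used to craft the FGSM perturbation direction.
    p = sigmoid(X @ w + b)
    return (p - y)[:, None] * w[None, :]

eps, lr = 0.1, 0.1  # assumed perturbation budget and learning rate
for _ in range(200):
    # FGSM: nudge each input in the direction that increases the loss.
    X_adv = X + eps * np.sign(input_grad(X, y, w, b))
    # Update parameters on the perturbed (worst-case) examples.
    p = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

# Compare accuracy on clean vs. adversarially perturbed inputs.
acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
X_test_adv = X + eps * np.sign(input_grad(X, y, w, b))
acc_adv = np.mean((sigmoid(X_test_adv @ w + b) > 0.5) == y)
print(acc_clean, acc_adv)
```

The design choice being illustrated: instead of minimizing loss on clean inputs, each step minimizes loss on inputs perturbed toward higher loss, so accuracy degrades less under perturbation, which is the behavior the abstract attributes to robust training.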

Similar works

Authors

Institutions

Topics

Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education · Autopsy Techniques and Outcomes