This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Addressing Vulnerability in Medical Deep Learning through Robust Training
Citations: 0
Authors: 2
Year: 2023
Abstract
Deep neural networks have been incorporated into healthcare for diagnosing and detecting medical conditions. However, studies have shown that the vulnerability of neural networks to adversarial attacks and noise remains a pervasive problem, one that compromises medical practitioners' trust in such systems and the accuracy of their diagnoses, prognoses, and outcome predictions. In this study we show that robust training methods help models withstand not only adversarial attacks but also noise and calibration errors.
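The abstract names robust training against adversarial attacks but gives no details of the paper's method. As a hedged illustration only, the sketch below shows one common robust-training scheme, FGSM-based adversarial training, on a toy logistic-regression "model" standing in for a diagnostic network; the model, data, and hyperparameters are all hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_x(w, x, y):
    # Gradient of binary cross-entropy w.r.t. the input x
    # for a logistic model p = sigmoid(w @ x): dL/dx = (p - y) * w.
    return (sigmoid(w @ x) - y) * w

def grad_wrt_w(w, x, y):
    # Gradient of the same loss w.r.t. the weights: dL/dw = (p - y) * x.
    return (sigmoid(w @ x) - y) * x

def fgsm(w, x, y, eps):
    # Fast Gradient Sign Method: perturb x in the direction that
    # increases the loss, bounded by eps in the L-infinity norm.
    return x + eps * np.sign(grad_wrt_x(w, x, y))

# Toy data: two well-separated clusters (a stand-in for image features).
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Adversarial training: at each step, craft a perturbed input and
# take the gradient step on that input instead of the clean one.
w = np.zeros(2)
lr = 0.5
for _ in range(200):
    for xi, yi in zip(X, y):
        x_adv = fgsm(w, xi, yi, eps=0.2)
        w -= lr * grad_wrt_w(w, x_adv, yi)

# Robust accuracy: evaluate on FGSM-perturbed inputs.
preds = np.array([sigmoid(w @ fgsm(w, xi, yi, 0.2)) > 0.5
                  for xi, yi in zip(X, y)])
acc = float(np.mean(preds == y))
print(f"robust accuracy: {acc:.2f}")
```

Training on the perturbed inputs rather than the clean ones is what makes the learned boundary keep a margin of at least `eps`, which is the basic mechanism behind robustness to both attacks and input noise.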
Similar works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,638 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,884 citations
CBAM: Convolutional Block Attention Module
2018 · 21,747 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,478 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,662 citations