This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Defense Against Adversarial Attacks Based on Stochastic Descent Sign Activation Networks on Medical Images
Citations: 7 · Authors: 3 · Year: 2022
Abstract
Machine learning techniques in medical imaging systems are accurate, but minor perturbations of the data, known as adversarial attacks, can fool them. These attacks leave the systems vulnerable to fraud and deception, posing a significant practical challenge. We present gradient-free trained sign activation networks to detect and deter adversarial attacks on medical imaging AI systems. Experimental results on MRI, Chest X-ray, and Histopathology image datasets show that attacking our proposed model requires a higher distortion than attacking existing state-of-the-art models, in some cases up to twice as much. Our model classifies adversarial examples with an average accuracy of 88.89%, compared with 81.48% for both MLP and LeNet and 38.89% for ResNet18. We conclude that the sign network, owing to the high distortion required to attack it and its high accuracy on transferred adversarial examples, is a viable defense against adversarial attacks. Our work is a significant step towards safe and secure medical AI systems.
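To make the core idea concrete, here is a minimal sketch of a sign activation network's forward pass. This is an assumption-laden illustration, not the paper's implementation: the layer sizes, random weights, and function names are all hypothetical. The key property is that hidden units output only ±1, so sign() has zero gradient almost everywhere; such networks cannot be trained by backpropagation, which is why gradient-free training is used, and their discrete outputs make small input perturbations less likely to change the prediction.

```python
import numpy as np

def sign_forward(x, W1, b1, w2, b2):
    """Forward pass of a one-hidden-layer sign activation network.

    Hidden activations and the output are in {-1, +1}; the hidden
    layer is a thresholded linear map, the output a majority-style
    vote over the binary hidden units.
    """
    h = np.sign(W1 @ x + b1)           # binary hidden activations
    return int(np.sign(w2 @ h + b2))   # predicted class in {-1, +1}

# Hypothetical random weights for a 4-input, 8-hidden-unit network.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
w2, b2 = rng.standard_normal(8), 0.0

x = rng.standard_normal(4)
label = sign_forward(x, W1, b1, w2, b2)
```

Note that a tiny perturbation of `x` changes the prediction only if some pre-activation `W1 @ x + b1` crosses zero; this discreteness is one intuition for why a larger distortion is needed to attack such a model.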
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,310 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,369 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,289 citations
CBAM: Convolutional Block Attention Module
2018 · 21,234 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,483 citations