This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Adversarial attacks on hybrid classical-quantum deep learning models for histopathological cancer detection
Citations: 1
Authors: 5
Year: 2025
Abstract
We analyzed the application of quantum machine learning to histopathological cancer detection under adversarial attacks, demonstrating its potential to enhance diagnostic performance in adverse circumstances. Adversarial attacks are a major concern for any image classification model: they perturb the original input images and thereby cause misclassification. To address this problem, we first developed hybrid quantum transfer learning models by combining transfer learning architectures such as ResNet-18, VGG-16, Inception-v3, and AlexNet with variational quantum circuits for histopathological cancer detection. Second, we applied white-box adversarial attacks using the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD), and evaluated each model's performance under these attacks. We found that the Hybrid Classical-Quantum Deep Learning (HCQ-DL) model with ResNet-18 achieves 78.05% accuracy under FGSM attacks, compared to a best classical accuracy of 50.84% (classical ResNet-18). Similarly, under DeepFool attacks, HCQ-DL with ResNet-18 achieves 52.12% accuracy versus a best classical accuracy of 37.87% (classical ResNet-18), and under PGD attacks it achieves 52.94% accuracy versus a best classical accuracy of 32.05% (classical Inception-v3). We conclude that HCQ-DL models are more resilient to these adversarial attacks than classical deep learning models and show potential for greater robustness when combined with additional defense techniques.
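The abstract's headline comparison rests on FGSM, which perturbs an input in the direction of the sign of the loss gradient with step size ε. A minimal sketch of that perturbation, using a hypothetical toy linear softmax classifier in place of the paper's hybrid quantum model (the analytic input gradient of cross-entropy stands in for backpropagation through a network):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_attack(W, b, x, y, epsilon):
    """FGSM on a linear softmax classifier: x_adv = clip(x + eps * sign(dL/dx)).

    For logits z = W @ x + b and cross-entropy loss, the input gradient is
    W.T @ (softmax(z) - onehot(y)); deeper models would obtain it by backprop.
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)          # analytic gradient of the loss w.r.t. the input
    # One signed step of size epsilon, clipped back to the valid pixel range [0, 1].
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 12))             # toy 2-class "model" (illustrative only)
b = np.zeros(2)
x = rng.uniform(size=12)                 # flattened toy "image" with pixels in [0, 1]
x_adv = fgsm_attack(W, b, x, y=0, epsilon=0.03)
```

Because clipping can only shrink the step, every pixel of `x_adv` differs from `x` by at most ε; PGD, also used in the paper, iterates this step with a projection back into the ε-ball.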
Related works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,416 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,552 citations
CBAM: Convolutional Block Attention Module
2018 · 21,448 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,347 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,535 citations