OpenAlex · Updated hourly · Last updated: 19.03.2026, 17:26

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Using Adversarial Images to Assess the Stability of Deep Learning Models Trained on Diagnostic Images in Oncology

2021 · 3 citations · Open Access
Open full text at the publisher

3 citations · 10 authors · Year: 2021

Abstract

Purpose: Deep learning (DL) models have rapidly become a popular and cost-effective tool for image classification within oncology. A major limitation of DL models is output instability: small perturbations in input data can dramatically alter model output. The purpose of this study is to investigate the robustness of DL models in the oncologic image domain through the application of adversarial images: manipulated images with small pixel-level perturbations designed to assess the stability of DL models.

Experimental Design: We examined the impact of adversarial images on the classification accuracies of DL models trained to classify cancerous lesions across three common oncologic imaging modalities (CT, mammogram, and MRI). The CT model was trained to classify malignant lung nodules using the LIDC dataset, the mammogram model to classify malignant breast lesions using the DDSM dataset, and the MRI model to classify brain metastases using an institutional dataset. We also explored the utility of an iterative adversarial training approach to improve the stability of DL models against small pixel-level changes.

Results: Oncologic images showed instability under small pixel-level changes. A pixel-level perturbation of 0.004 caused a substantial share of oncologic images to be misclassified by their respective DL models (CT 25.64%, mammogram 23.93%, MRI 6.36%). Adversarial training improved the stability and robustness of DL models trained on oncologic images compared to naive models (CT 67.72% vs. 26.92%; mammogram 63.39% vs. 27.68%; MRI 87.20% vs. 24.32%).

Conclusions: DL models naively trained on oncologic images exhibited dramatic instability under small pixel-level changes, resulting in substantial decreases in accuracy. Adversarial training techniques improved the stability and robustness of DL models against such pixel-level changes.
Prior to clinical implementation, adversarial training should be considered for proposed DL models to improve overall performance and safety.
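The abstract does not name the attack used to generate the adversarial images; a common choice for this kind of bounded pixel-level perturbation is the Fast Gradient Sign Method (FGSM). The sketch below is a hypothetical NumPy toy (a single-layer logistic "classifier" standing in for the paper's DL models; all names and values are illustrative, not from the paper) showing how an ε of 0.004 caps the per-pixel change:

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon=0.004):
    """FGSM: nudge every pixel by epsilon in the sign of the loss
    gradient, then clip back to the valid intensity range [0, 1]."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

def loss_grad_wrt_input(image, weights, label):
    """Gradient of binary cross-entropy w.r.t. the input pixels for a
    toy logistic model p = sigmoid(w . x) (stand-in for a DL model)."""
    p = 1.0 / (1.0 + np.exp(-image @ weights))
    return (p - label) * weights

rng = np.random.default_rng(0)
x = rng.random(64)            # flattened 8x8 "image", pixels in [0, 1]
w = rng.normal(size=64)       # toy model weights (illustrative only)
g = loss_grad_wrt_input(x, w, label=1.0)
x_adv = fgsm_perturb(x, g)
max_change = np.max(np.abs(x_adv - x))   # bounded by epsilon = 0.004
```

Because the step is ε · sign(gradient) followed by clipping, no pixel moves by more than 0.004, matching the perturbation magnitude reported in the Results.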

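The iterative adversarial training the authors describe can be sketched as a loop that regenerates adversarial examples against the current model at every step and fits on the mixture of clean and perturbed inputs. Again a hedged toy: NumPy logistic regression with an FGSM-style inner step stands in for the paper's deep models, and every hyperparameter here is an assumption for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.004, lr=0.5, steps=800, seed=1):
    """Iterative adversarial training sketch: each step regenerates
    FGSM-style examples against the *current* weights, then takes one
    gradient step on the clean + adversarial mixture."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        # dLoss/dInput per sample for BCE through a sigmoid: (p - y) * w
        grad_x = (p - y)[:, None] * w[None, :]
        X_adv = np.clip(X + epsilon * np.sign(grad_x), 0.0, 1.0)
        X_mix = np.vstack([X, X_adv])          # clean + adversarial batch
        y_mix = np.concatenate([y, y])
        p_mix = sigmoid(X_mix @ w)
        w -= lr * X_mix.T @ (p_mix - y_mix) / len(y_mix)
    return w

# Toy, separable data: the label depends only on the first feature.
rng = np.random.default_rng(0)
X = np.hstack([rng.random((200, 8)), np.ones((200, 1))])  # last col = bias
y = (X[:, 0] > 0.5).astype(float)
w = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5).astype(float) == y)
```

The key design point is that the adversarial batch is rebuilt against the current weights each iteration, so the model is always trained against perturbations tailored to its latest decision boundary rather than a fixed set generated once up front.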

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Adversarial Robustness in Machine Learning