This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Adversarial artificial intelligence in radiology: Attacks, defenses, and future considerations
Citations: 9 · Authors: 3 · Year: 2025
Abstract
Artificial intelligence (AI) is rapidly transforming radiology, with applications spanning disease detection, lesion segmentation, workflow optimization, and report generation. As these tools become more integrated into clinical practice, new concerns have emerged regarding their vulnerability to adversarial attacks. This review provides an in-depth overview of adversarial AI in radiology, a topic of growing relevance in both research and clinical domains. It begins by outlining the foundational concepts and model characteristics that make machine learning systems particularly susceptible to adversarial manipulation. A structured taxonomy of attack types is presented, including distinctions based on attacker knowledge, goals, timing, and computational frequency. The clinical implications of these attacks are then examined across key radiology tasks, with literature highlighting risks to disease classification, image segmentation and reconstruction, and report generation. Potential downstream consequences such as patient harm, operational disruption, and loss of trust are discussed. Current mitigation strategies are reviewed, spanning input-level defenses, model training modifications, and certified robustness approaches. In parallel, the role of broader lifecycle and safeguard strategies is considered. By consolidating current knowledge across technical and clinical domains, this review helps identify gaps, inform future research priorities, and guide the development of robust, trustworthy AI systems in radiology.
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,316 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,385 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,292 citations
CBAM: Convolutional Block Attention Module
2018 · 21,257 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,488 citations