This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Securing AI-Powered Healthcare Decision Support Systems: A Comprehensive Review of Attack Vectors and Defensive Strategies
Citations: 1
Authors: 5
Year: 2025
Abstract
Background: Artificial intelligence (AI) is emerging as a transformative technology in healthcare, enabling the development of AI-powered clinical decision support systems (CDSS). These systems leverage large-scale data and advanced computational algorithms to assist in diagnosis, treatment planning, and patient management. However, the integration of AI into clinical practice faces critical challenges, particularly cybersecurity risks and system vulnerabilities. Objectives: This review aims to evaluate the security vulnerabilities of AI-powered healthcare decision support systems by identifying common attack vectors and examining current defensive strategies. It also explores the implications of these vulnerabilities for patient safety, data integrity, and healthcare delivery. Methods: A comprehensive literature review was conducted using databases including PubMed, Scopus, IEEE Xplore, Web of Science, SpringerLink, and Google Scholar. Articles published between 2015 and 2025 were screened following PRISMA guidelines. Keywords included "AI in healthcare", "decision support systems", "cybersecurity", "adversarial attacks", and "defensive strategies". Results: Of 1,255 initially identified articles, 200 were included after applying inclusion and exclusion criteria. The findings reveal that AI-powered systems are susceptible to various threats, including adversarial inputs, model inversion, data poisoning, and privacy breaches. Several defensive mechanisms, such as secure model training, encryption, and adversarial detection frameworks, have been proposed and partially implemented. Conclusions: AI-powered decision support systems hold great promise for enhancing healthcare delivery, but unresolved security vulnerabilities pose significant risks. Addressing these concerns requires multidisciplinary collaboration among AI developers, healthcare professionals, and cybersecurity experts.
Future research and funding should prioritize secure deployment, ethical governance, and regulatory compliance to ensure safe and effective integration into clinical practice.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations