This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Insider Threats in Healthcare Application: Harnessing AI To Mitigate The Risks
Citations: 0
Authors: 3
Year: 2024
Abstract
In the evolving landscape of healthcare, organizations increasingly rely on digital systems to manage patient data, streamline operations, and enhance outcomes. This dependence introduces significant risks, particularly from insider threats. These threats, originating from employees, contractors, or other trusted individuals, pose challenges such as data breaches, financial losses, and compromised patient care. The primary challenge is the sophisticated and deceptive nature of insider threats. They can manifest as unauthorized access to Electronic Health Records (EHR), data manipulation within Population Health Management (PHM) tools, sabotage of Clinical Decision Support Systems (CDSS), theft of proprietary AI algorithms, fraudulent billing activities, and negligence in handling confidential data. These actions can lead to severe repercussions, including regulatory penalties and loss of patient trust. To address these risks, healthcare organizations are increasingly adopting Artificial Intelligence (AI) and Machine Learning (ML) technologies. These technologies offer robust solutions for detecting and preventing insider threats through techniques such as anomaly detection, behavioral biometrics, natural language processing (NLP), predictive modeling, graph analytics, and automated incident response. AI and ML enable proactive detection, real-time monitoring, reduced false positives, scalability, and adaptive learning. Deploying AI-based detection tools significantly enhances the ability to protect sensitive patient data and maintain system integrity. This paper explores the efficacy of AI in mitigating the risks from insider threats in healthcare and provides insights into the ethical implications of its deployment.
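The abstract names anomaly detection over access patterns as one AI/ML technique for spotting insider threats. As a minimal illustrative sketch (not the paper's method), the idea can be shown with a modified z-score over per-user EHR access counts; the median absolute deviation is used so that the outlier itself does not inflate the spread. All names and numbers below are hypothetical.

```python
from statistics import median

def flag_anomalous_users(access_counts, threshold=3.5):
    """Flag users whose daily EHR access count is an outlier by the
    modified z-score (based on the median absolute deviation), which
    is robust to the outliers themselves skewing the statistics."""
    counts = list(access_counts.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)
    if mad == 0:  # all counts (nearly) identical: nothing to flag
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    return [user for user, n in access_counts.items()
            if 0.6745 * abs(n - med) / mad > threshold]

# Hypothetical daily record-access counts per staff member.
logs = {"nurse_a": 42, "nurse_b": 38, "clerk_c": 45,
        "doctor_d": 51, "intern_e": 640}
print(flag_anomalous_users(logs))  # → ['intern_e']
```

A production system would combine such statistical baselines with the behavioral biometrics and graph analytics the abstract mentions, rather than rely on a single threshold.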
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations