OpenAlex · Updated hourly · Last updated: 16.03.2026, 15:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care

2025 · 6 citations · Radiology: Artificial Intelligence

6 citations · 10 authors · 2025

Abstract

The integration of large language models (LLMs) into health care offers tremendous opportunities to improve medical practice and patient care. Besides being susceptible to biases and threats common to all artificial intelligence (AI) systems, LLMs pose unique cybersecurity risks that must be carefully evaluated before these AI models are deployed in health care. LLMs can be exploited in several ways, including malicious attacks, privacy breaches, and unauthorized manipulation of patient data. Moreover, malicious actors could use LLMs to infer sensitive patient information from training data. Furthermore, manipulated or poisoned data fed into these models could alter their outputs in ways that benefit malicious actors. This report presents the cybersecurity challenges posed by LLMs in health care and provides strategies for mitigation. By implementing robust security measures and adhering to best practices during the model development, training, and deployment stages, stakeholders can help minimize these risks and protect patient privacy. Keywords: Computer Applications-General (Informatics), Application Domain, Large Language Models, Artificial Intelligence, Cybersecurity © RSNA, 2025.

Related works