OpenAlex · Updated hourly · Last updated: 24.03.2026, 12:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Considerations for Patient Privacy of Large Language Models in Health Care: Scoping Review (Preprint)

2025 · 0 citations · 8 authors

Abstract

<sec> <title>BACKGROUND</title> The application of large language models (LLMs) in health care holds significant potential for enhancing patient care and advancing medical research. However, protecting patient privacy remains a critical issue, especially when handling patient health information (PHI). </sec>
<sec> <title>OBJECTIVE</title> This scoping review aims to evaluate the adequacy of current privacy-protection approaches in existing studies on LLMs that use PHI (PHI-LLMs) within the health care domain and to identify areas in need of improvement. </sec>
<sec> <title>METHODS</title> A search of the literature published from January 1, 2022, to July 20, 2025, was performed on July 20, 2025, using 2 databases (PubMed and Embase). This scoping review addressed the following three research questions: (1) What studies on the development and application of LLMs using PHI currently exist within the health care domain? (2) What patient privacy considerations are addressed in existing PHI-LLM research, and are these measures sufficient? (3) How can future research on the development and application of LLMs using PHI better protect patient privacy? Studies were included if they focused on the development and application of LLMs within health care using PHI, encompassing activities such as model construction, fine-tuning, optimization, testing, and performance comparison. Eligible literature comprised original research articles written in English. Studies were excluded if they used publicly available datasets, under the assumption that such data had been adequately deidentified. Non-English publications, reviews, abstracts, incomplete reports, and preprints were also excluded because they lack rigorous peer review. </sec>
<sec> <title>RESULTS</title> This review systematically identified 9823 records on PHI-LLMs and included 464 studies published between 2022 and 2025.
Among the 464 studies, (1) some neglected ethical review (n=45, 9.7%) or patient informed consent (n=148, 31.9%) during the research process, (2) more than a third (n=178, 38.4%) failed to report whether effective measures to protect PHI had been implemented, and (3) anonymization and deidentification methods were reported with a significant lack of transparency and detail. </sec> <sec> <title>CONCLUSIONS</title> We propose comprehensive recommendations across 3 phases (study design, implementation, and reporting) to strengthen patient privacy protection and transparency in PHI-LLM research. This study emphasizes the urgent need to develop stricter regulatory frameworks and to adopt advanced privacy-protection technologies to effectively safeguard PHI. Future applications of LLMs in health care are expected to balance innovation with robust patient privacy protection, thereby enhancing ethical standards and scientific credibility. </sec>

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Privacy-Preserving Technologies in Data