OpenAlex · Updated hourly · Last updated: 16.03.2026, 22:29

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Completeness and Quality of Neurology Referral Letters Generated by a Large Language Model for Standardized Scenarios

2025 · 0 citations · Medicina · Open Access

Citations: 0 · Authors: 1 · Year: 2025

Abstract

<i>Background and Objectives</i>: Large language models (LLMs) offer promising applications in healthcare, including drafting referral letters. However, access to LLMs specifically designed for medical practice remains limited. While ChatGPT is widely available, its ability to generate comprehensive and clinically appropriate neurology referral letters remains uncertain. This study aimed to systematically evaluate the completeness and quality of neurology referral letters generated by ChatGPT for standardized clinical scenarios. <i>Materials and Methods</i>: Five standardized clinical scenarios representing common neurological complaints encountered in family medicine settings (headache, memory problems, stroke/TIA, tremor, radiculopathy) were used. Using a consistent prompt, ChatGPT (GPT-4o, 2025 release) generated 10 referral letters per scenario (50 letters in total). A dual board-certified neurologist and family physician scored the letters using a 30-point rubric across multiple domains: completeness (demographics, chief complaint, history of present illness, physical exam findings, management, and consultation questions) and quality (language level, structure, and letter length). Descriptive statistics and inferential analyses (ANOVA and Kruskal-Wallis tests) were applied to assess performance across scenarios. <i>Results</i>: The mean total score was 25.76/30 (95% CI: 24.85-26.67). Completeness averaged 87%, while language and structure consistently scored above 90%. Content gaps appeared in 36 out of 50 letters (72%), mainly in the history of present illness and physical examination sections. Variability was observed across letters, though not statistically significant between scenarios (ANOVA: <i>F</i> = 1.14, <i>p</i> = 0.352; Kruskal-Wallis: <i>H</i> = 3.52, <i>p</i> = 0.475). <i>Conclusions</i>: ChatGPT produced neurology referral letters of high linguistic quality but variable completeness, especially for clinically complex content. The variability pattern among letters reflected model inconsistency rather than case type. The reliance on a single rater and use of a non-validated rubric represent limitations. Future studies should include multiple raters, inter-rater reliability testing, and validated scoring frameworks. Ultimately, access to tailored LLMs exclusively trained for medical documentation could improve outcomes while safeguarding patient privacy.
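The inferential analysis described in the abstract (a one-way ANOVA and a Kruskal-Wallis test across the five scenario groups, plus a 95% confidence interval for the mean total score) can be sketched as follows. This is a minimal illustration with hypothetical scores drawn at random; the study's per-letter rubric scores are not published in this summary, so the printed statistics will not match the paper's reported values.

```python
import numpy as np
from scipy import stats

# Hypothetical rubric totals (0-30 points) for 10 letters in each of the
# five scenarios; placeholder data, not the study's actual scores.
rng = np.random.default_rng(0)
scenarios = ["headache", "memory", "stroke_tia", "tremor", "radiculopathy"]
scores = {s: rng.normal(25.8, 2.0, size=10).clip(0, 30) for s in scenarios}

all_scores = np.concatenate(list(scores.values()))

# Mean total score with a 95% confidence interval (t-distribution).
mean = all_scores.mean()
ci_low, ci_high = stats.t.interval(
    0.95, df=len(all_scores) - 1, loc=mean, scale=stats.sem(all_scores)
)

# One-way ANOVA (parametric) and Kruskal-Wallis (rank-based) across scenarios.
f_stat, p_anova = stats.f_oneway(*scores.values())
h_stat, p_kw = stats.kruskal(*scores.values())

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```

Running both tests, as the authors did, is a common robustness check: ANOVA assumes roughly normal, equal-variance groups, while Kruskal-Wallis compares rank distributions without those assumptions.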




Topics

Healthcare Systems and Technology · Artificial Intelligence in Healthcare and Education · Social Media in Health Education