OpenAlex · Updated hourly · Last updated: 13.03.2026, 21:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Enhancing Informed Consent in Orthopaedic Surgery: A Proof-of-Concept Study of Language Model-Generated Clinic Letters

2025 · 0 citations · Orthopaedic Proceedings

Citations: 0 · Authors: 4 · Year: 2025

Abstract

Effective communication is vital for informed consent in clinical practice, especially since the Montgomery v Lanarkshire ruling, yet NHS resource pressures demand more efficient patient–clinician interactions. We therefore evaluated four large language models—ChatGPT-O1, Deepseek, Gemini and Copilot—by asking each to draft two clinic letters for six common elective orthopaedic operations, using uniform, patient-friendly prompts that mirrored real consultations. Across the 48 letters generated, GPT-O1 most faithfully incorporated the gold-standard complication profile for each procedure (compliance 0.923 ± 0.104, P < 0.001), with Gemini next at 0.860 ± 0.079. All models produced drafts pitched at a roughly 7th-to-8th-grade reading level according to composite Flesch-Kincaid scores (range 6.850–8.517). On secondary readability metrics, Gemini's outputs were judged easiest to digest, achieving the lowest Gunning-Fog and SMOG grades (10.657 ± 1.291 and 11.583 ± 1.191, respectively; P < 0.001), whereas GPT-O1's letters were comparatively dense (8.830 ± 0.570 and 9.983). Understandability assessed by three blinded clinicians using the Patient-Education Materials Assessment Tool also favoured Gemini (PEMAT 0.826 ± 0.054) over GPT-O1 (0.718 ± 0.071, P < 0.005). Collectively, these results suggest that contemporary LLMs can produce concise, comprehensible clinic correspondence that preserves essential risk information, thereby supporting shared decision-making while lessening clerical load. Future research should explore integrating real-time electronic health-record data, refining model retrieval to capture uncommon complications, and developing governance frameworks that address data privacy, clinical accountability and bias, so that this technology can be responsibly embedded in routine orthopaedic practice. 
In parallel, stakeholder education about the capabilities and limitations of generative AI will be crucial to maintain patient trust and ensure clinicians remain central arbiters of personalised expert surgical advice.
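The readability grades reported above (composite Flesch-Kincaid, Gunning-Fog, and SMOG) are all computed from simple sentence, word, and syllable counts. As a minimal sketch of how such scores are derived, the snippet below implements the standard published formulas for the three metrics; the `readability` function name and the vowel-group syllable counter are illustrative assumptions (production tools such as the study's scoring pipeline typically use dictionary-based syllable counts), not the authors' actual code.

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    # Real readability tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Return Flesch-Kincaid grade, Gunning-Fog index, and SMOG grade."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    # "Complex" words (3+ syllables) drive both Gunning-Fog and SMOG.
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    n_s, n_w = len(sentences), len(words)
    return {
        # Flesch-Kincaid grade level
        "flesch_kincaid_grade": 0.39 * n_w / n_s + 11.8 * syllables / n_w - 15.59,
        # Gunning-Fog index
        "gunning_fog": 0.4 * (n_w / n_s + 100 * complex_words / n_w),
        # SMOG grade (normalised to a 30-sentence sample)
        "smog": 1.0430 * sqrt(complex_words * 30 / n_s) + 3.1291,
    }
```

Lower grades indicate easier text, which is why the abstract treats Gemini's lower Gunning-Fog and SMOG values as "easiest to digest"; note that SMOG is calibrated for samples of at least 30 sentences, so short clinic letters sit at the edge of its intended range.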


Topics

Patient-Provider Communication in Healthcare · Artificial Intelligence in Healthcare and Education · Health Literacy and Information Accessibility