OpenAlex · Updated hourly · Last updated: 25.04.2026, 01:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large language models improve readability of patient education materials on vascular conditions

2025 · 2 citations · JVS-Vascular Insights · Open Access
Open full text at the publisher

2 citations

8 authors

Year: 2025

Abstract

Objective

Patient education materials frequently exceed the recommended sixth-grade reading level. While large language models (LLMs) have shown inconsistent accuracy in responding to medical queries, they have demonstrated promise in simplifying complex text. This capability has not yet been studied in vascular patient education materials. This study evaluates whether ChatGPT-4o and Gemini 1.5 Pro can improve the readability of Society for Vascular Surgery (SVS) patient education flyers.

Methods

SVS health flyers were selected for five common vascular conditions: abdominal aortic aneurysm (AAA), carotid artery disease (CAD), deep vein thrombosis (DVT), peripheral artery disease (PAD), and varicose veins (VV). Each flyer was submitted to ChatGPT-4o and Gemini 1.5 Pro, which generated simplified versions using structured Extensible Markup Language (XML) prompts to guide consistent editing. Vascular surgeons, blinded to the source of each flyer, independently scored the original and LLM-modified flyers on accuracy, comprehensiveness, and understandability using a 0–10 Likert scale. Readability was assessed using the Average Reading Level Consensus tool, and textual features, including word count, sentence count, syllables per word, and percentage of complex words, were quantified. Paired t-tests were used to analyze differences in readability scores. ANOVA with Tukey HSD post hoc testing was used to assess textual characteristics.

Results

The original SVS flyers had an average reading grade level of 10.61 (SD = 0.88). Gemini and ChatGPT-4o significantly reduced the reading level to 8.18 (SD = 1.24, p = 0.012) and 8.37 (SD = 0.88, p = 0.00013), respectively. SVS flyers averaged 605 words, 29.8 sentences, 1.7 syllables per word, and 20.4% complex words. Both LLMs significantly reduced syllables per word (Gemini: 1.52, p < 0.0001; ChatGPT: 1.53, p < 0.0001) and the proportion of complex words (Gemini: 12.7%, p < 0.0001; ChatGPT: 13.6%, p < 0.0001). There were no significant differences between the Gemini and ChatGPT outputs in readability or textual metrics. Physician scores for accuracy, comprehensiveness, and understandability showed no significant differences between the SVS originals and either LLM, nor between the two LLMs.

Conclusions

LLMs significantly improved the readability of SVS patient education materials by approximately two grade levels without compromising content accuracy. These findings support the use of LLMs to enhance the accessibility of medical information when grounded in trusted source material, rather than relying on unprompted content generation.
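The textual features the study quantifies (word count, sentence count, syllables per word, percentage of complex words) are standard readability inputs. The sketch below is a hypothetical illustration of how such metrics can be computed, using a simple vowel-group heuristic for syllables and the Flesch-Kincaid grade formula as one readability estimate; the study itself used the Average Reading Level Consensus tool, which aggregates several formulas, so these numbers are only an approximation of that pipeline.

```python
import re

def count_syllables(word: str) -> int:
    """Rough English syllable count: vowel groups, minus a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def text_metrics(text: str) -> dict:
    """Word count, sentence count, syllables/word, % complex words (3+ syllables),
    and Flesch-Kincaid grade level (0.39*W/S + 11.8*Syl/W - 15.59)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    syllables = [count_syllables(w) for w in words]
    total = sum(syllables)
    complex_words = sum(1 for s in syllables if s >= 3)
    fk = 0.39 * len(words) / len(sentences) + 11.8 * total / len(words) - 15.59
    return {
        "words": len(words),
        "sentences": len(sentences),
        "syllables_per_word": total / len(words),
        "pct_complex": 100 * complex_words / len(words),
        "fk_grade": round(fk, 2),
    }
```

Comparing an original flyer's output against an LLM-simplified version with this function would show the kind of shifts the study reports: fewer syllables per word and a smaller share of complex words driving the grade level down.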

Topics

Text Readability and Simplification · Artificial Intelligence in Healthcare and Education · Health Literacy and Information Accessibility