This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Enhancing Patient Education with AI: A Readability Analysis of AI-Generated Versus American Academy of Ophthalmology Online Patient Education Materials
Citations: 3
Authors: 2
Year: 2025
Abstract
<b>Background/Objectives</b>: Patient education materials (PEMs) in ophthalmology often exceed recommended readability levels, limiting accessibility for many patients. While organizations such as the American Academy of Ophthalmology (AAO) provide relatively easy-to-read resources, their topic coverage is limited, and PEMs from other associations are often too complex. AI chatbots could help clinicians create more comprehensive, accessible PEMs to improve patient understanding. This study compares the readability of PEMs written by the AAO with those generated by large language models (LLMs), including ChatGPT-4o, Microsoft Copilot, and Meta-Llama-3.1-70B-Instruct. <b>Methods</b>: Each LLM was prompted to generate PEMs for 15 common diagnoses relating to the cornea and anterior chamber, followed by a follow-up readability-optimized (FRO) prompt to reword the content at a 6th-grade reading level. The readability of these materials was evaluated using nine readability metrics computed with Python libraries and compared to existing PEMs on the AAO website. <b>Results</b>: ChatGPT, Copilot, and Llama each successfully generated PEMs for all 15 topics, though every output exceeded the recommended 6th-grade reading level. Initially prompted ChatGPT, Copilot, and Llama outputs averaged grade levels of 10.8, 12.2, and 13.2, respectively; FRO prompting significantly improved readability to 8.3 for ChatGPT, 11.2 for Copilot, and 9.3 for Llama (<i>p</i> < 0.001). Although readability improved, AI-generated PEMs were, on average, not statistically easier to read than AAO PEMs, which averaged an 8.0 Flesch-Kincaid Grade Level. <b>Conclusions</b>: Properly prompted AI chatbots can generate PEMs with improved readability, approaching the level of AAO materials. However, most outputs remain above the recommended 6th-grade reading level. A subjective analysis of a representative subtopic showed less nuance than the corresponding AAO material, especially in areas of clinical uncertainty.
By providing a blueprint that can be used in human-AI hybrid workflows, AI chatbots show promise as tools for ophthalmologists to increase the availability of accessible PEMs in ophthalmology. Future work should include a detailed qualitative review by ophthalmologists using a validated tool (e.g., DISCERN or PEMAT) to score accuracy, bias, and completeness alongside readability.
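The Flesch-Kincaid Grade Level cited in the results is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. As an illustration only, here is a minimal self-contained sketch in Python; it uses a rough vowel-group heuristic for syllable counting, whereas the dedicated readability libraries used in studies like this one apply more accurate syllable rules. The sample text is invented for demonstration.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels (treating 'y' as a vowel)
    # and drop a trailing silent 'e'. Real libraries are more precise.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Split on sentence-ending punctuation; extract word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Published FKGL formula.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Hypothetical patient-education sentence at a low reading level.
sample = "The cornea is the clear front layer of your eye. It helps you see."
grade = flesch_kincaid_grade(sample)
```

For actual analyses, established Python packages such as textstat implement this and related readability metrics; the heuristic above is only meant to make the grade-level numbers in the abstract concrete.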
Related works
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,079 citations
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,021 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,778 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,177 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,907 citations