This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Readability of Hospital Online Patient Education Materials Across Otolaryngology Specialties
2
Citations
8
Authors
2025
Year
Abstract
Introduction: This study evaluates the readability of online patient education materials (OPEMs) across otolaryngology subspecialties, hospital characteristics, and national otolaryngology organizations, while assessing AI alternatives.

Methods: Hospitals from the US News Best ENT list were queried for OPEMs describing a chosen surgery per subspecialty; the American Academy of Otolaryngology-Head and Neck Surgery (AAO), the American Laryngological Association (ALA), Ear, Nose, and Throat United Kingdom (ENTUK), and the Canadian Society of Otolaryngology-Head and Neck Surgery (CSOHNS) were similarly queried. Google was queried for the top 10 hospital links per procedure. Ownership (private/public), presence of the respective otolaryngology fellowship, region, and median household income (by zip code) were collected. Readability was assessed using seven indices and averaged: Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Readability (GFR), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Readability Index (CLRI), and Linsear Write Readability Formula (LWRF). AI-generated materials from ChatGPT were compared for readability, accuracy, content, and tone. Analyses were conducted between subspecialties, against national organizations and the NIH standard, and across demographic variables.

Results: = 0.005). ChatGPT-generated materials averaged a 6.8-grade reading level, demonstrating improved readability, especially with specialized prompting, compared to all hospital and organization OPEMs.

Conclusion: OPEMs from all sources exceed the NIH readability standard. ENTUK serves as a benchmark for accessible language, while ChatGPT demonstrates the feasibility of producing more readable content. Otolaryngologists might consider using ChatGPT, with caution, to generate patient-friendly materials, and advocate for national-level improvements in patient education readability.
Related Works
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,200 citations
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,195 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,897 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,270 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,984 citations