This is an overview page with metadata for this scientific work. The full article is available from the publisher.
1033 Assessing the Efficacy of Using ChatGPT-4 to Improve the Readability of Existing Patient Education Materials for Common Neurosurgical Conditions
Citations: 0
Authors: 13
Year: 2025
Abstract
INTRODUCTION: Existing neurosurgical patient education materials (PEMs) can be too complex for the average American, who reads at an eighth-grade level, and may contribute to poor health literacy. Large language model chatbots may help rewrite existing PEMs to improve readability in a cost-effective manner.

METHODS: Neurosurgical PEMs pertaining to stroke, laminectomy, pituitary tumors, epilepsy, and hydrocephalus published by the top 100 US hospitals, as ranked by the U.S. News Health Report, were collected. ChatGPT-4 was used to rewrite 25 randomly selected PEMs at or near the reading level of the average American (eighth grade). Rewritten PEMs were assessed using the following measures of reading level and difficulty: Flesch-Kincaid Grade Level, Flesch Reading Ease (FRE), Gunning Fog Index (GFI), Automated Readability Index (ARI), Coleman-Liau Index, and SMOG Index. The accuracy of all rewritten PEMs was assessed by a senior neurosurgical resident.

RESULTS: The mean FRE scores for rewritten PEMs on each topic were significantly lower than for the original materials (p<0.01), except for spinal stenosis (p=0.104), and the rewrites were validated for accuracy. For rewritten materials, the mean Flesch-Kincaid Grade Level was 7.58, the mean ARI 9.53, the mean Coleman-Liau Index 11.51, the mean GFI 9.62, and the mean SMOG Index 9.28. The ARI for rewritten hydrocephalus materials and the Coleman-Liau score for rewritten pituitary tumor materials did not differ significantly from the original PEMs. All other comparisons indicated significantly improved readability of rewritten PEMs compared with the originals (p<0.05).

CONCLUSIONS: Large language model chatbots, such as ChatGPT-4, can efficiently rewrite these PEMs at a lower reading level while maintaining the accuracy of the material.
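The readability indices reported above are closed-form formulas over word, sentence, and syllable counts. As a minimal sketch of how such scores are computed, the snippet below evaluates Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level using a heuristic vowel-group syllable counter; the heuristic and helper names are illustrative assumptions, not the study's actual tooling, which typically relies on dictionary-based syllabification.

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count groups of consecutive vowels; drop a trailing
    # silent "e" when another vowel group exists. Real readability
    # tools use pronunciation dictionaries for better accuracy.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fre, fkgl
```

Higher FRE indicates easier text, while FKGL approximates the US school grade needed to understand it, which is why an eighth-grade target corresponds to an FKGL near 8.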