This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Quality and Readability of Patient Educational Materials Generated by ChatGPT-4o for Pediatric Ophthalmologic Surgeries
Citations: 2
Authors: 7
Year: 2025
Abstract
PURPOSE: To evaluate the quality and readability of patient education materials (PEMs) generated by ChatGPT-4o (OpenAI) about pediatric ophthalmologic surgical procedures and to compare them to PEMs from the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website.

METHODS: The authors prompted ChatGPT-4o to provide PEMs on four procedures: strabismus surgery without adjustable sutures, strabismus surgery with adjustable sutures, pediatric cataract surgery, and nasolacrimal duct probing. The prompt requested responses at a 6th-grade reading level in both Spanish and English. English ChatGPT responses were compared to AAPOS PEMs on quality (using the Quality of Generated Language Outputs for Patients [QGLOP] scale) and readability. English and Spanish ChatGPT responses were also compared on quality and readability.

RESULTS: AAPOS PEMs scored higher in quality than English ChatGPT responses (P = .042). There was no significant difference in readability between AAPOS PEMs and English ChatGPT responses. English and Spanish ChatGPT responses did not significantly differ in quality or readability.

CONCLUSIONS: ChatGPT-4o-generated PEMs on pediatric ophthalmologic surgical conditions are currently inferior in quality to PEMs on the AAPOS website. However, because ChatGPT is continually updated and retrained, this study should be repeated in the future to determine whether these metrics improve over time.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations