This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Readability, Reliability, and Quality of Nursing Care Plan Texts Generated by ChatGPT
Citations: 1
Authors: 5
Year: 2025
Abstract
Background: Research on ChatGPT-supported nursing care plan texts plays a critical role in making nursing education more innovative and accessible. These studies strengthen education by improving the readability, reliability, and quality of the texts. Purpose: This study aims to evaluate the readability, reliability, and quality of nursing care plan texts generated by ChatGPT. Methods: The study sample consisted of 50 texts generated by ChatGPT based on selected nursing diagnoses from NANDA 2021–2023. These texts were evaluated using a descriptive criteria form, the DISCERN tool, and readability indices including the Flesch Reading Ease Score (FRES), Simple Measure of Gobbledygook (SMOG), Gunning Fog Index, and Flesch-Kincaid Grade Level (FKGL). Results: According to our findings, the readability level of the nursing care plans generated by ChatGPT was significantly higher than the recommended 6th-grade level (P < .001). The mean DISCERN score was 45.93 ± 4.72, indicating a moderate level of reliability for all evaluated texts. Additionally, 97.5% of the texts achieved moderate scores on the information quality subscale. A positive and statistically significant correlation was found between the number of verifiable references and both the reliability (r = 0.408) and quality (r = 0.379) scores of the texts (P < .05). Conclusion: It should be noted that these AI-based chatbot tools cannot replace comprehensive patient care plans. In AI applications, it is recommended that the readability of generated content be improved, reliable references be included, and all outputs be reviewed by a professional team.
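The FRES and FKGL indices mentioned in the abstract are computed from sentence, word, and syllable counts using standard published formulas. The following sketch illustrates those two formulas; the vowel-group syllable counter is a crude heuristic of our own and not the counting method used by the study's evaluation tools.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for an English text using the standard formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), len(sentences)
    # Flesch Reading Ease: higher = easier (90-100 is roughly 5th grade).
    fres = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    # Flesch-Kincaid Grade Level: approximate US school grade required.
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    return fres, fkgl
```

A 6th-grade target, as recommended in the abstract, corresponds roughly to FKGL ≤ 6 or FRES ≥ 80.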
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations