This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Readability, reliability, and quality of nursing care plan texts generated by ChatGPT
Citations: 0
Authors: 4
Year: 2025
Abstract
Nursing care plans require clinical reasoning, prioritization, and patient-centered decision-making, which distinguishes them from more general AI-generated educational texts. As large language models such as ChatGPT are increasingly used to support nursing education and care planning, it is essential to evaluate the readability, reliability, and quality of the nursing care plans they produce. This study aims to evaluate the readability, reliability, and quality of nursing care plan texts generated by ChatGPT. The study sample consisted of 50 texts generated by ChatGPT (version 4.0) based on selected nursing diagnoses from NANDA 2021–2023. These texts were evaluated using a descriptive criteria form, the DISCERN tool, and readability indices including the Flesch Reading Ease Score (FRES), Simple Measure of Gobbledygook (SMOG), Gunning Fog Index, and Flesch-Kincaid Grade Level (FKGL). The analysis demonstrated that the nursing care plans generated by ChatGPT showed a moderate level of quality and reliability. However, the reading grade levels required by the texts were generally higher than is desirable for clinical and educational use, indicating that the texts may be difficult for some users to understand without adaptation. The findings also suggest that the presence of verifiable references contributes positively to the overall quality and reliability of the generated care plans. Evaluating the readability, reliability, and quality of AI-generated nursing care plans is essential for ensuring their safe and meaningful use in nursing education and clinical practice. These findings highlight the importance of guiding and refining AI-supported care planning to better align with professional standards and patient-centered care needs.
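The readability indices named in the abstract are simple functions of sentence length and syllable counts. As an illustration only (not the study's own tooling), a minimal sketch of the FRES and FKGL formulas in Python, using a naive vowel-group syllable heuristic; the function name and heuristic are assumptions for demonstration:

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels;
    # every word counts as at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    # Split into sentences and words with simple regex rules.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    # Flesch Reading Ease Score: higher = easier to read.
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    # Flesch-Kincaid Grade Level: approximate U.S. school grade required.
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

Note the opposite orientations of the two scores: a harder text lowers FRES but raises FKGL, which is why the study's finding of "higher than desirable" reading levels signals texts that are harder, not easier, to read.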
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations