This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Comparative analysis of nursing care plans produced by artificial intelligence models (ChatGPT, Gemini, and DeepSeek) in terms of readability, reliability, and quality
Citations: 0 · Authors: 2 · Year: 2026
Abstract
While AI chatbots have increased access to healthcare information, evidence regarding the readability, reliability, and overall quality of nursing care plans generated by these systems remains limited. This study aimed to comparatively evaluate nursing care plan texts generated by ChatGPT, Gemini, and DeepSeek in terms of readability, reliability, and overall quality. Thirty nursing diagnoses were randomly selected from the NANDA International 2021–2023 taxonomy. For each diagnosis, nursing care plans were generated using three AI chatbots, resulting in 90 texts. Outputs were comparatively evaluated using a descriptive information form, the DISCERN instrument, and multiple readability measures (FRES, SMOG, Gunning Fog Index, and Flesch–Kincaid Grade Level). Readability analyses indicated that nursing care plans generated by all three AI models significantly exceeded the recommended sixth-grade reading level (P < .001). DISCERN scores reflected moderate reliability, with mean scores of 57.41 ± 5.9 for ChatGPT, 58.41 ± 4.8 for Gemini, and 56.51 ± 6.8 for DeepSeek. Overall, 27 texts (90%) were rated as providing nursing care information of moderate quality. The presence of verifiable references demonstrated a statistically significant positive association with both reliability and quality scores (P < .05). Although AI chatbots demonstrate potential as supportive tools in nursing education and documentation, they should not be used as standalone resources for generating complete nursing care plans without professional review. Improvements in content clarity, reference accuracy, and expert oversight are necessary to enhance their applicability in nursing practice.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations