This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Reflections on “Developing and Evaluating the Use of ChatGPT as a Screening Tool for Nurses Conducting Structured Literature Reviews: Proof of Concept Study Results”
Citations: 0
Authors: 2
Year: 2025
Abstract
We read with great interest the recent article by Mudd et al. entitled “Developing and Evaluating the Use of ChatGPT as a Screening Tool for Nurses Conducting Structured Literature Reviews: Proof of Concept Study Results,” published in the Journal of Clinical Nursing (2025). This innovative study represents an important milestone in exploring the integration of artificial intelligence (AI) into nursing research. The authors deserve recognition for addressing one of the most time-consuming and cognitively demanding stages of evidence synthesis: the screening of large volumes of literature. Their work provides valuable empirical evidence to inform how AI can support and augment nursing scholarship rather than replace human expertise. We also note that a previous short correspondence by Daungsupawong and Wiwanitkit (2025) raised methodological points about this study. Our reflection complements their discussion by extending the analysis to ethical, educational, and cross-disciplinary dimensions that are equally critical to the responsible integration of AI in nursing science.

The study's methodological transparency and comparative analysis of large language model (LLM) versions enhance its credibility. Particularly notable is the finding that GPT-3.5 Turbo outperformed newer models, underscoring the need to understand how prompt structure and model architecture influence the interpretation of complex nursing content. Nevertheless, several points merit further attention to guide future research. The dataset was confined to a single thematic area, public involvement in nursing education, which limits generalizability to broader domains such as patient safety, clinical communication, and ethics. Expanding datasets across multiple nursing contexts could strengthen external validity and applicability.
Moreover, relying on a single reviewer as the “gold standard” introduces potential subjectivity; employing two independent reviewers with consensus resolution would enhance methodological rigor and benchmarking reliability. The absence of direct comparison with established semi-automated screening tools, such as Rayyan (Ouzzani et al. 2016) and ASReview (Van de Schoot et al. 2023), restricts the analytical depth of the study. Incorporating these platforms in future work could clarify ChatGPT's relative strengths and limitations. A hybrid design that combines the predictive efficiency of classical machine-learning models with the interpretative depth of LLMs may provide an optimal balance between accuracy and contextual sensitivity.

Ethical and governance considerations are also critical for the responsible adoption of AI in nursing research. Nursing knowledge is inherently relational and contextual, qualities that AI systems may not fully capture. The World Health Organization (WHO) has issued guidance emphasizing transparency, accountability, and equity in the deployment of large multimodal AI systems in health (WHO 2024). Aligning future nursing–AI studies with such frameworks would help ensure that innovation reinforces professional integrity and patient-centered care rather than undermining them. Finally, while the authors briefly discuss educational implications, the pedagogical potential of AI in fostering self-directed learning and reflective practice warrants deeper exploration. Integrating AI literacy into nursing curricula could enhance researchers' capacity to use these tools critically and ethically.

In conclusion, Mudd et al. provide an insightful and forward-looking contribution to the responsible integration of AI in nursing. Their study opens the door to further interdisciplinary collaborations among nurses, data scientists, and ethicists.
Some of the methodological refinements discussed here could be considered, if appropriate, before the article's final publication, while broader conceptual issues might be acknowledged as limitations or proposed for future research. We congratulate the authors for their pioneering effort and share their vision of a collaborative future where human expertise and artificial intelligence work synergistically to advance nursing scholarship. The authors declare no conflicts of interest. No new data were generated or analysed in this study. Therefore, data sharing is not applicable.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations