This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Comment on ‘Knowledge and Opinions of Operating Room Nurses About Artificial Intelligence: A Descriptive Cross‐Sectional Study’
Citations: 0
Authors: 1
Year: 2026
Abstract
We read with great interest the study by Durukan et al. investigating operating room nurses' knowledge and opinions of artificial intelligence (AI) (Durukan et al. 2025). This work fills a critical gap by focusing on a specialised nursing group, revealing high AI knowledge levels, reliance on social media for information, ethical concerns, and positive perceptions of AI's potential to reduce workload and improve care quality. Its descriptive design and clear focus provide valuable baseline data for AI integration in operating rooms. However, several gaps merit discussion to strengthen the practical translation of its findings.

First, the study identifies social media as the primary (yet unreliable) source of AI information but does not explore the underlying reasons. Abdullah and Fakieh note that healthcare professionals often turn to informal sources when formal training is unavailable (Abdullah and Fakieh 2020), suggesting a lack of structured AI education for operating room nurses. This oversight limits targeted solutions: understanding whether the barriers are resource scarcity, time constraints, or irrelevant training content is critical to designing effective interventions.

Second, ethical concerns (e.g., lack of empathy, system malfunctions) are mentioned but lack contextual depth. Rony et al. emphasise that nurses' ethical worries are often tied to specific clinical scenarios, such as AI misinterpreting surgical data (Rony et al. 2024), yet the study does not link these concerns to particular operating room workflows. This makes it difficult to address fears in practice, as interventions must align with real-world use cases.

Third, the single-centre private hospital sample limits generalisability. Lora and Foran's integrative review highlights significant variability in nurses' AI perceptions across healthcare settings (Lora and Foran 2024), but this study provides no data on public hospitals, different regions, or hospitals of varying size. Without such comparisons, the results cannot be broadly applied to diverse operating room contexts.

Fourth, the training recommendations are overly vague. Karaarslan et al. demonstrate that targeted AI training improves nurses' attitudes and practical confidence (Karaarslan et al. 2024), but the study does not specify which training content (e.g., AI ethics, clinical application scenarios, technical operation) is most needed. This ambiguity hinders the development of actionable training programmes.

In conclusion, Durukan et al.'s study lays important groundwork for understanding operating room nurses' AI-related cognition. Addressing information source barriers, contextualising ethical concerns, expanding sample representativeness, and clarifying training needs will enhance the utility of the findings. Future research incorporating these elements can better support AI's responsible integration into operating room nursing practice.

The author has nothing to report. The author declares no conflicts of interest. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations