This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI in critical care: A roadmap to the future
Citations: 3
Authors: 9
Year: 2025
Abstract
Artificial intelligence (AI) has the potential to revolutionize critical care medicine by enhancing patient care, improving resource allocation and reducing clinician workload. Despite this promise, many AI applications remain confined to scientific research rather than being integrated into everyday clinical practice. This manuscript aims to help intensivists prepare themselves and their intensive care units (ICUs) for AI implementation. It provides a comprehensive yet practical roadmap, detailing AI methods, applications, responsible AI principles, common roadblocks and implementation strategies. We propose a three-tiered risk-based approach to AI implementation, starting with low-risk low-complexity administrative AI, progressing to logistical AI, and finally integrating medical AI as clinical decision support systems. This ensures a gradual build-up of AI skills, technical AI readiness of the ICU, incremental value demonstration and alignment with evolving regulatory standards. For each AI project, responsible AI principles should be incorporated and adequately addressed throughout the entire AI lifecycle, from development to validation to implementation and scaling. Common roadblocks for AI implementation including technical issues (such as data quality and interoperability issues), organizational challenges (such as lack of a clear vision and strategy), and clinical concerns (such as limited AI literacy among staff), should be addressed proactively. By following this roadmap, ICUs can achieve sustainable AI integration, ultimately improving patient outcomes and clinician experience. The future of critical care lies in the responsible and strategic adoption of AI, with intensivists playing a central role in shaping its implementation.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 cit.
Authors
Institutions
- Erasmus MC (NL)
- KU Leuven (BE)
- Pontificia Universidad Católica de Chile (CL)
- Heinrich Heine University Düsseldorf (DE)
- The University of Texas Health Science Center at San Antonio (US)
- The University of Texas at San Antonio (US)
- Amsterdam University Medical Centers (NL)
- Amsterdam Neuroscience (NL)
- Radboud University Nijmegen (NL)
- Radboud University Medical Center (NL)
- Erasmus University Rotterdam (NL)