This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Using Explainable Artificial Intelligence in a Systematic Literature Review of Pressure Injury Prevention
Citations: 0
Authors: 4
Year: 2026
Abstract
Artificial intelligence (AI) is rapidly transforming health care by augmenting clinical decision-making and enabling clinicians and researchers to perform literature searches, including systematic literature reviews (SLRs). This article describes the methods used to develop an AI-generated SLR and the lessons learned by the research team (composed of pressure injury [PI] content experts and AI experts) while completing this project. To generate the SLR, a proprietary explainable AI (XAI) platform was used that incorporated generative-discriminative algorithms and reinforcement learning with human feedback. The following research question was posed: "What are best practices for pressure injury prevention in hospitalized patients?" Content experts defined and iteratively refined the search parameters and exclusion criteria. The XAI screened 1414 records, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. After study selection, content experts reviewed the draft SLR for citation accuracy, synthesis quality, and clinical validity. This process yielded 110 studies; among these, 33 had originally been excluded but were re-incorporated after content expert input. The AI-generated SLR paper contained multiple citation errors and misinterpretations, and its narrative quality was mechanical, with unsupported generalizations and factual inaccuracies. We found that content experts are critical for determining the correct search terms and interpreting AI-generated results. Similarly, collaboration with AI experts is necessary to improve understanding of AI applications. A detailed review of any AI-generated SLR is essential to ensure evidence fidelity.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,490 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,376 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,832 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,553 citations