This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Understanding Responsible Development in Artificial Intelligence-based Clinical Prediction Models (AIPM) that Prognosticate Mortality: A Scoping Review Protocol (Preprint)
0
Citations
4
Authors
2025
Year
Abstract
<sec> <title>BACKGROUND</title> Prognostic inequity has been identified as a barrier to accessing end-of-life care for underrepresented groups. Artificial intelligence-based clinical prediction models (AIPMs) that prognosticate mortality have the potential to offer rapid, accessible, and accurate predictions that could streamline care. However, rather than improving accessibility and quality, they may also exacerbate pre-existing inequities in the healthcare system. Risks include erroneous outputs from biased training data, harmful outcomes from out-of-scope operationalization, and limited explainability due to model opacity. </sec> <sec> <title>OBJECTIVE</title> The goal of this study is to synthesize peer-reviewed literature on the creation and application of clinical prediction models that use artificial intelligence to prognosticate mortality in acute care settings for adult patients, offering new insights into responsible and ethical model development. </sec> <sec> <title>METHODS</title> A transdisciplinary, structured search strategy was developed in consultation with librarians from both the health sciences and the engineering sciences. The academic databases queried were Medline, Embase, IEEE Xplore, ACM Digital Library, Compendex, and Scopus. The search was conducted in spring 2025, and the results were uploaded to Covidence. A team of reviewers will screen in two rounds: title and abstract, then full text. Eligibility will be determined by publication type (academic journal articles or full-length conference proceedings), language, model output, and use of AI. Data will be charted using adapted charting tools and then analysed by descriptive summary and qualitative synthesis. </sec> <sec> <title>RESULTS</title> The search was completed on March 25, 2025, with screening starting in May 2025. Results are anticipated in January 2026. 
</sec> <sec> <title>CONCLUSIONS</title> This review will provide a comprehensive summary of AI clinical prediction models that output mortality predictions, highlighting the specific elements included in their development. Informed by the Responsible Research and Innovation framework, this study will identify elements relevant to AIPMs, including interest holder engagement, interdisciplinary collaboration, and computational and clinical ethics. These aspects will inform an understanding of the responsible development of mortality prediction tools in relation to anticipation, reflexivity, inclusion, and responsiveness. </sec>
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations