This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Application of comprehensive evaluation framework to Coronavirus Disease 19 studies: A systematic review of translational aspects of artificial intelligence in health care
Citations: 1
Authors: 10
Year: 2023
Abstract
Background: Despite immense progress in artificial intelligence (AI) models, deployment in healthcare environments has been limited. The gap between potential and actual AI applications is likely due to the lack of translatability between the controlled research environments where these models are developed and the clinical environments for which the AI tools are ultimately intended.

Objective: We previously developed the Translational Evaluation of Healthcare AI (TEHAI) framework to assess the translational value of AI models and to support their successful transition to healthcare environments. In this study, we apply TEHAI to the COVID-19 literature in order to assess how well translational topics are covered.

Methods: A systematic literature search for COVID-AI studies published between December 2019-2020 returned 3,830 records. A subset of 102 papers that met the inclusion criteria was sampled for full review. Nine reviewers assessed the papers for translational value and collected descriptive data (each study was assessed by two reviewers). Evaluation scores and extracted data were compared by a third reviewer to resolve discrepancies. The review process was conducted on the Covidence software platform.

Results: We observed a significant trend: studies attained high scores for technical capability but low scores in the areas essential for clinical translatability. Specific questions regarding external model validation, safety, non-maleficence, and service adoption received failing scores in most studies.

Conclusions: Using TEHAI, we identified notable gaps in how well translational topics of AI models are covered in the COVID-19 clinical sphere. These gaps in areas crucial for clinical translatability could, and should, be addressed as early as the model development stage to increase translatability into real COVID-19 healthcare environments.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations