This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Investigation and evaluation of randomized controlled trials for interventions involving artificial intelligence
Citations: 7
Authors: 19
Year: 2021
Abstract
Objective Complete and transparent reporting is of critical importance for randomized controlled trials (RCTs). The present study aimed to determine the reporting quality and methodological quality of RCTs for interventions involving artificial intelligence (AI) and of their protocols. Methods We searched MEDLINE (via PubMed), Embase, Web of Science, CBMdisc, Wanfang Data, and CNKI from January 1, 2016, to November 11, 2020, to collect RCTs involving AI. We also retrieved the protocol of each included RCT where one could be obtained. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) statement and the Cochrane Collaboration's tool for assessing risk of bias (RoB) were used to evaluate reporting quality and methodological quality, respectively, and the SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) statement was used to evaluate the reporting quality of the protocols. The associations of the CONSORT-AI reporting rate with publication year, journal impact factor (IF), number of authors, sample size, and first author's country were analyzed univariately using Pearson's chi-squared test, or Fisher's exact test if the expected value in any cell was below 5. The compliance of the retrieved protocols with SPIRIT-AI was presented descriptively. Results Overall, 29 RCTs and three protocols were considered eligible. The CONSORT-AI items "title and abstract" and "interpretation of results" were reported by all RCTs, while the items with the lowest reporting rates were "funding" (0%), "implementation" (3.5%), and "harms" (3.5%). The risk of bias was high in 13 (44.8%) RCTs and unclear in 15 (51.7%); only one RCT (3.5%) had a low risk of bias. Compliance did not differ significantly by publication year, journal IF, number of authors, sample size, or first author's country.
Ten of the 35 SPIRIT-AI items (funding, participant timeline, allocation concealment mechanism, implementation, data management, auditing, declaration of interests, access to data, informed consent materials, and biological specimens) were not reported by any of the three protocols. Conclusions The reporting and methodological quality of RCTs involving AI need to be improved. Because few protocols were available, their quality could not be fully judged. Following the CONSORT-AI and SPIRIT-AI statements, together with appropriate attention to the risk of bias when designing and reporting AI-related RCTs, can promote standardization and transparency.
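The test-selection rule described in the methods (Pearson's chi-squared test, falling back to Fisher's exact test when any expected cell count is below 5) can be sketched in Python with SciPy. The function name and the example table are illustrative assumptions, not values from the study:

```python
from scipy.stats import chi2_contingency, fisher_exact

def association_test(table):
    """Pick Pearson's chi-squared test, or Fisher's exact test
    when any expected cell count falls below 5 (2x2 tables only)."""
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).any():
        _, p = fisher_exact(table)  # exact test for sparse 2x2 tables
        return "fisher", p
    return "chi2", p

# Hypothetical 2x2 table: compliant vs non-compliant counts
# in two publication-year groups (expected counts all >= 5 here)
test_used, p_value = association_test([[10, 20], [30, 40]])
print(test_used)
```

SciPy's `fisher_exact` only handles 2x2 tables, which matches the abstract's setting of comparing compliance between two groups per factor.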
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations
Authors
Institutions
- Lanzhou University (CN)
- University of Bern (CH)
- University of Geneva (CH)
- McMaster University (CA)
- Impact (CA)
- Iberoamerican Cochrane Centre (ES)
- Post Graduate Institute of Medical Education and Research (IN)
- Cochrane (KR)
- Korea University (KR)
- National Evidence-based Healthcare Collaborating Agency (KR)
- Korea Institute of Oriental Medicine (KR)
- Korea University of Science and Technology (KR)
- Tianjin University of Traditional Chinese Medicine (CN)
- London South Bank University (GB)