This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Checklist for Artificial Intelligence in Medical Imaging Reporting Adherence in Peer-Reviewed and Preprint Manuscripts With the Highest Altmetric Attention Scores: A Meta-Research Study
17
Citations
6
Authors
2022
Year
Abstract
<b>Purpose:</b> To establish reporting adherence to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) in diagnostic accuracy AI studies with the highest Altmetric Attention Scores (AAS), and to compare completeness of reporting between peer-reviewed manuscripts and preprints. <b>Methods:</b> MEDLINE, EMBASE, arXiv, bioRxiv, and medRxiv were retrospectively searched for the 100 diagnostic accuracy medical imaging AI studies in peer-reviewed journals and on preprint platforms with the highest AAS from the release of CLAIM to June 24, 2021. Studies were evaluated for adherence to the 42-item CLAIM checklist, with comparison between peer-reviewed manuscripts and preprints. The impact of additional factors was explored, including body region, COVID-19 diagnosis models, and journal impact factor. <b>Results:</b> Median CLAIM adherence was 48% (20/42). The median CLAIM score of manuscripts published in peer-reviewed journals was higher than that of preprints, 57% (24/42) vs 40% (16/42), <i>P</i> < .0001. Chest radiology was the body region with the least complete reporting (<i>P</i> = .0352), with manuscripts on COVID-19 less complete than others (43% vs 54%, <i>P</i> = .0002). For studies published in peer-reviewed journals with an impact factor, the CLAIM score correlated with impact factor, rho = 0.43, <i>P</i> = .0040. Completeness of reporting based on CLAIM score had a positive correlation with a study's AAS, rho = 0.68, <i>P</i> < .0001. <b>Conclusions:</b> Overall reporting adherence to CLAIM is low in imaging diagnostic accuracy AI studies with the highest AAS, with preprints reporting fewer study details than peer-reviewed manuscripts. Improved CLAIM adherence could promote adoption of AI into clinical practice and help investigators build upon prior work.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- Kingston General Hospital (CA)
- Kingston Health Sciences Centre (CA)
- University of Toronto (CA)
- Ottawa Hospital Research Institute
- Ottawa Hospital (CA)
- University of Ottawa (CA)
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin (DE)
- Freie Universität Berlin (DE)
- Humboldt-Universität zu Berlin (DE)
- The Scarborough Hospital (CA)
- Juravinski Hospital (CA)
- McMaster University (CA)
- Hamilton Health Sciences (CA)