This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Effect of TRIPOD+AI Guidelines on the Reporting Quality of Artificial Intelligence Prediction Models in Orthopaedic Surgery: An 18-Month Bibliometric Study
Citations: 0
Authors: 1
Year: 2025
Abstract
The TRIPOD+AI statement (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis, plus the Artificial Intelligence extension), published in April 2024, provides guidance for the transparent reporting of artificial intelligence (AI)-based prediction models, including specific items to report in abstracts. This study evaluated whether reporting quality in orthopaedic AI prediction model abstracts improved following the publication of the TRIPOD+AI guidelines. We searched PubMed for English-language studies evaluating AI prediction models in orthopaedics across two 18-month periods: pre-TRIPOD+AI (October 2022 to April 2024) and post-TRIPOD+AI (April 2024 to October 2025). Abstract compliance was assessed against four TRIPOD+AI criteria: performance measure specification (Item 8), sample size and outcome events (Item 9), performance estimates with confidence intervals (Item 11), and study registration (Item 13). Reporting frequencies were compared using chi-squared tests. Among 522 eligible studies (pre-TRIPOD+AI=214, post-TRIPOD+AI=308), reporting of performance measures remained high (96.7% vs 98.4%, p=0.35). Full compliance with Item 9 showed a non-significant increase (32.7% to 39.9%, p=0.11). Reporting of outcome events increased from 36.0% to 44.5% (p=0.06), while reporting of participant numbers declined from 82.2% to 75.0% (p=0.06). Confidence interval reporting remained low (18.7% vs 16.6%, p=0.61), and study registration was nearly absent (0.5% vs 1.0%, p=0.89). No abstract met all four criteria. Eighteen months after its publication, TRIPOD+AI has not measurably improved reporting quality in orthopaedic AI abstracts. Confidence interval reporting and study registration remain particularly deficient. These findings suggest that guideline dissemination alone may be insufficient and that active journal-level implementation strategies may be needed to improve reporting standards.
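The chi-squared comparisons described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the cell counts are reconstructed from the reported percentages for full Item 9 compliance (32.7% of 214 pre-TRIPOD+AI vs 39.9% of 308 post-TRIPOD+AI), so they are approximations of the underlying data.

```python
# Hedged sketch of one chi-squared comparison from the abstract.
# Counts are reconstructed from reported percentages (an assumption),
# not taken from the authors' raw data.
from scipy.stats import chi2_contingency

pre_total, post_total = 214, 308
pre_compliant = round(0.327 * pre_total)    # ≈ 70 abstracts fully compliant with Item 9
post_compliant = round(0.399 * post_total)  # ≈ 123 abstracts fully compliant with Item 9

# 2x2 contingency table: rows = period, columns = compliant / non-compliant
table = [
    [pre_compliant, pre_total - pre_compliant],
    [post_compliant, post_total - post_compliant],
]

# scipy applies Yates' continuity correction by default for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # consistent with the reported p = 0.11
```

With these reconstructed counts the test yields a p-value of roughly 0.11, matching the non-significant difference reported in the abstract.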
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations