This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Necessity and Feasibility Assessment Tool of the Clinical Prediction Model for Individual Prognosis Before Its Startup: A Multi‐Sectoral Delphi Consensus Study
0
Citations
19
Authors
2025
Year
Abstract
OBJECTIVE: The overwhelming majority of prediction models have never been applied in practice. An evidence-based review is needed to show that new research is justified. This study aimed to develop an assessment tool that enables researchers and peer reviewers to conduct a rapid and comprehensive evaluation of the necessity and feasibility of a planned clinical prediction model before its startup. METHODS: The framework for developing quality assessment tools was followed to develop the necessity and Feasibility Assessment Tool of CLInical Prediction models for individual prognosis (FATCLIP). First, the scope, framework, and item pool of the FATCLIP were identified by a steering group of 15 experts through a web-based meeting. An iterative Delphi process was then conducted to refine the FATCLIP; the Delphi group enrolled 34 experts from multiple disciplines, including epidemiologists, statisticians, clinicians, evidence-based medicine specialists, health care administrators, and academic journal editors. RESULTS: Through two steering group meetings and 2 rounds of the Delphi process, the framework of the FATCLIP was determined based on expert consensus, comprising 6 domains and 31 signaling questions. The six domains were as follows: prediction outcome, review of existing models, candidate predictors, data, development and validation, and application and extension. A usage manual for the FATCLIP was also presented. CONCLUSIONS: The FATCLIP aims to help researchers and peer reviewers detect potential challenges in the development and application of a clinical prediction model for individual prognosis before its start-up, so that research on clinical prediction models can be conducted efficiently and research waste avoided.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,553 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,444 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,943 cit.
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Authors
Institutions
- Beijing Children’s Hospital(CN)
- Peking University(CN)
- Peking University Third Hospital(CN)
- Capital Medical University(CN)
- University of Calgary(CA)
- Provincial Laboratory of Public Health(CA)
- Chinese Academy of Medical Sciences & Peking Union Medical College(CN)
- Lanzhou University(CN)
- Fudan University(CN)
- Zhongshan Hospital(CN)
- Jilin University(CN)
- Jilin International Studies University(CN)
- First Affiliated Hospital of Jinan University(CN)
- Sichuan University(CN)
- West China Hospital of Sichuan University(CN)
- Wuhan University(CN)
- Johnson & Johnson (United States)(US)
- Johnson & Johnson (United Kingdom)(GB)
- Soochow University(CN)
- Science and Technology Department of Sichuan Province(CN)
- Sichuan Academy of Traditional Chinese Medicine(CN)
- Chengdu University of Traditional Chinese Medicine(CN)
- Huashan Hospital(CN)