This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Sensitivity, specificity and avoidable workload of using a large language models for title and abstract screening in systematic reviews and meta-analyses
Citations: 8
Authors: 11
Year: 2023
Abstract
Importance: Systematic reviews are time-consuming and, despite the exponential growth of the scientific literature, are still performed predominantly manually by researchers.
Objective: To investigate the sensitivity and specificity, and to estimate the avoidable workload, of using an AI-based large language model (LLM) (Generative Pre-trained Transformer [GPT] version 3.5-Turbo from OpenAI) to perform title and abstract screening in systematic reviews.
Data Sources: Unannotated bibliographic databases from five systematic reviews conducted by researchers from Cochrane Austria, Germany and France, all published after January 2022 and hence not in the training data set of GPT 3.5-Turbo.
Design: We developed a set of prompts for GPT models aimed at mimicking the process of title and abstract screening by human researchers. We compared the LLM's recommendations to rule out citations based on title and abstract with the authors' decisions, systematically reappraising all discrepancies between the LLM and the original decisions. We used bivariate models for meta-analyses of diagnostic accuracy to obtain pooled estimates of sensitivity and specificity. We performed a simulation to assess the workload avoidable by limiting human title and abstract screening to citations not "ruled out" by the LLM, in a random sample of 100 systematic reviews published between 01/07/2022 and 31/12/2022. We extrapolated the estimates of avoidable workload to the health-related systematic reviews of therapeutic interventions in humans published per year.
Results: Performance of GPT models was tested across 22,666 citations. Pooled estimates of sensitivity and specificity were 97.1% (95% CI 89.6% to 99.2%) and 37.7% (95% CI 18.4% to 61.9%), respectively. For 2022, we estimated the workload of title and abstract screening for systematic reviews at 211,013 to 422,025 person-hours. Limiting human screening to citations not "ruled out" by GPT models could reduce this workload by 65% and save 106,268 to 276,053 person-hours (i.e., 66 to 172 person-years of work) every year.
Conclusions and Relevance: AI systems based on large language models provide highly sensitive and moderately specific recommendations for ruling out citations during title and abstract screening in systematic reviews. Using them to "triage" citations before human assessment could reduce the workload of evidence synthesis.
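A minimal sketch of how the measures reported in the abstract relate to the screening counts. The counts in the usage example are invented for illustration, not the study's data:

```python
def screening_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, fraction of citations ruled out).

    tp: relevant citations the LLM kept (true positives)
    fn: relevant citations the LLM wrongly ruled out (false negatives)
    tn: irrelevant citations the LLM ruled out (true negatives)
    fp: irrelevant citations the LLM kept (false positives)
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # If human screening is limited to citations the LLM did not rule out,
    # the avoidable workload is the share of all citations ruled out.
    ruled_out = (tn + fn) / (tp + fn + tn + fp)
    return sensitivity, specificity, ruled_out

# Hypothetical example: 200 screened citations
sens, spec, saved = screening_metrics(tp=97, fn=3, tn=40, fp=60)
```

Note that the study pools sensitivity and specificity across reviews with a bivariate meta-analytic model rather than from a single aggregated table; the sketch above only shows the per-review definitions.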
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- Assistance Publique – Hôpitaux de Paris (FR)
- Centre de Recherche Épidémiologie et Statistique (FR)
- Hôtel-Dieu de Paris (FR)
- Université Paris Cité (FR)
- Inserm (FR)
- Institut National de Recherche pour l'Agriculture, l'Alimentation et l'Environnement (FR)
- Universität für Weiterbildung Krems (AT)
- RTI International (US)
- University of Freiburg (DE)
- Columbia University (US)