This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Collaborative large language models for automated data extraction in living systematic reviews
Citations: 31
Authors: 21
Year: 2025
Abstract
OBJECTIVE: Data extraction from the published literature is the most laborious step in conducting living systematic reviews (LSRs). We aim to build a generalizable, automated data extraction workflow leveraging large language models (LLMs) that mimics the real-world 2-reviewer process.

MATERIALS AND METHODS: A dataset of 10 trials (22 publications) from a published LSR was used, focusing on 23 variables related to trial, population, and outcomes data. The dataset was split into prompt development (n = 5) and held-out test sets (n = 17). GPT-4-turbo and Claude-3-Opus were used for data extraction. Responses from the 2 LLMs were considered concordant if they were the same for a given variable. The discordant responses from each LLM were provided to the other LLM for cross-critique. Accuracy, ie, the total number of correct responses divided by the total number of responses, was computed to assess performance.

RESULTS: In the prompt development set, 110 (96%) responses were concordant, achieving an accuracy of 0.99 against the gold standard. In the test set, 342 (87%) responses were concordant. The accuracy of the concordant responses was 0.94. The accuracy of the discordant responses was 0.41 for GPT-4-turbo and 0.50 for Claude-3-Opus. Of the 49 discordant responses, 25 (51%) became concordant after cross-critique, increasing accuracy to 0.76.

DISCUSSION: Concordant responses by the LLMs are likely to be accurate. In instances of discordant responses, cross-critique can further increase the accuracy.

CONCLUSION: Large language models, when simulated in a collaborative, 2-reviewer workflow, can extract data with reasonable performance, enabling truly "living" systematic reviews.
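The abstract outlines a collaborative extraction loop (independent extraction by two LLMs, a concordance check, and cross-critique of discordant answers) and defines accuracy as correct responses divided by total responses. The sketch below is a minimal illustration of that workflow under stated assumptions: `ask_llm`, the prompt text, and the critique wording are hypothetical placeholders, not the authors' actual prompts or implementation.

```python
def ask_llm(model: str, prompt: str) -> str:
    """Placeholder for an API call to the named model (assumption, not the paper's code)."""
    raise NotImplementedError


def extract_variable(article_text: str, variable: str) -> str:
    """Extract one variable using two LLM 'reviewers' plus cross-critique of discordant answers."""
    prompt = f"From the trial report below, extract '{variable}'.\n\n{article_text}"
    answer_a = ask_llm("gpt-4-turbo", prompt)
    answer_b = ask_llm("claude-3-opus", prompt)

    # Concordant responses are accepted directly.
    if answer_a.strip().lower() == answer_b.strip().lower():
        return answer_a

    # Discordant responses: each model is shown the other's answer and re-answers.
    critique = (
        "Another reviewer extracted '{other}' for '" + variable + "'. "
        "Reassess the report and give your final answer."
    )
    revised_a = ask_llm("gpt-4-turbo", prompt + "\n\n" + critique.format(other=answer_b))
    revised_b = ask_llm("claude-3-opus", prompt + "\n\n" + critique.format(other=answer_a))

    if revised_a.strip().lower() == revised_b.strip().lower():
        return revised_a
    return f"UNRESOLVED: {revised_a} | {revised_b}"  # flag for human adjudication


def accuracy(responses: list[str], gold: list[str]) -> float:
    """Accuracy as defined in the abstract: correct responses / total responses."""
    correct = sum(r.strip().lower() == g.strip().lower() for r, g in zip(responses, gold))
    return correct / len(gold)
```

In this sketch, string equality stands in for whatever matching criterion the authors used to judge concordance against the gold standard; unresolved disagreements are left for human adjudication.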
Related works
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
2021 · 89,405 citations
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
2009 · 83,030 citations
The Measurement of Observer Agreement for Categorical Data
1977 · 77,780 citations
Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement
2009 · 63,396 citations
Measuring inconsistency in meta-analyses
2003 · 62,060 citations
Authors
- Muhammad Ali Khan
- Umair Ayub
- Syed Arsalan Ahmed Naqvi
- Kaneez Zahra Rubab Khakwani
- Zaryab bin Riaz Sipra
- Ammad Raina
- Sihan Zhou
- Huan He
- Amir Saeidi
- Bashar Hasan
- R. Bryan Rumble
- Danielle S. Bitterman
- Jeremy L. Warner
- Jia Zou
- Amyé Tevaarwerk
- Konstantinos Leventakos
- Kenneth L. Kehl
- Jeanne Palmer
- M. Hassan Murad
- Chitta Baral
- Irbaz Bin Riaz
Institutions
- WinnMed (US)
- Mayo Clinic in Florida (US)
- University of Arizona (US)
- Rashid Latif Medical College (PK)
- Yale University (US)
- Arizona State University (US)
- Mayo Clinic in Arizona (US)
- American Society of Clinical Oncology (US)
- Dana-Farber Cancer Institute (US)
- Providence College (US)
- Brown University (US)
- Rhode Island Hospital (US)