This is an overview page with metadata for this scientific article. The full article is available from the publisher.
How good are large language models for automated data extraction from randomized trials?
13 citations · 7 authors · 2024
Abstract
In evidence synthesis, data extraction is a crucial procedure, but it is time intensive and prone to human error. The rise of large language models (LLMs) in the field of artificial intelligence (AI) offers a solution to these problems through automation. In this case study, we evaluated the performance of two prominent LLM-based AI tools for automated data extraction. Randomized trials from two systematic reviews were used as part of the case study. Prompts for each data extraction task (e.g., extract the event count of the control group) were formulated separately for binary and continuous outcomes. The percentage of correct responses (Pcorr) was tested in 39 randomized controlled trials reporting 10 binary outcomes and 49 randomized controlled trials reporting one continuous outcome. The Pcorr and the agreement across three runs for data extracted by the two AI tools were compared with well-verified metadata. For the extraction of binary events in the treatment group across 10 outcomes, Pcorr ranged from 40% to 87% for ChatPDF and from 46% to 97% for Claude. For continuous outcomes, Pcorr ranged from 33% to 39% across six tasks (Claude only). Agreement between the three runs of each task was generally good, with Cohen's kappa ranging from 0.78 to 0.96 for ChatPDF and from 0.65 to 0.82 for Claude. Our results highlight the potential of ChatPDF and Claude for automated data extraction. Whilst promising, the percentage of correct responses is still unsatisfactory, and substantial improvements are needed before current AI tools can be adopted in research practice.

Highlights

1. What is already known: In evidence synthesis, data extraction is a crucial procedure, but it is time intensive and prone to human error, with reported data extraction error rates at the meta-analysis level reaching up to 67%. The rise of large language models (LLMs) in the field of artificial intelligence (AI) offers a solution to these problems through automation.

2. What is new: In this case study, we investigated the performance of two AI tools for data extraction and confirmed that AI tools can match or exceed human performance when extracting binary-outcome data from randomized trials. However, the AI tools performed poorly at extracting continuous-outcome data.

3. Potential impact for Research Synthesis Methods readers outside the authors' field: Our study suggests LLMs have great potential to assist data extraction in evidence syntheses through (semi-)automation. Further efforts are needed to improve accuracy, especially for continuous-outcome data.
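The abstract reports run-to-run agreement as Cohen's kappa. As a minimal sketch of how such an agreement statistic is computed, the following pure-Python function treats two extraction runs as two "raters" labeling the same set of trials; the function, variable names, and example extracted values are illustrative assumptions, not data or code from the study.

```python
from collections import Counter

def cohen_kappa(run_a, run_b):
    """Cohen's kappa between two sets of categorical labels for the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    rate and p_e is the agreement expected by chance from the two runs'
    marginal label frequencies.
    """
    assert len(run_a) == len(run_b) and len(run_a) > 0
    n = len(run_a)
    # Observed agreement: fraction of items where both runs gave the same answer.
    p_o = sum(a == b for a, b in zip(run_a, run_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    counts_a, counts_b = Counter(run_a), Counter(run_b)
    p_e = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both runs used a single identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical event counts extracted by two runs over five trials:
run1 = ["12", "30", "7", "30", "45"]
run2 = ["12", "29", "7", "30", "45"]
print(round(cohen_kappa(run1, run2), 2))  # prints 0.75
```

Because kappa discounts chance agreement, it is stricter than raw percent agreement: the two runs above agree on 4 of 5 trials (80%), yet kappa is 0.75.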
Institutions
- Nanjing University of Posts and Telecommunications (CN)
- Qatar University (QA)
- The University of Queensland (AU)
- Inserm (FR)
- Université Paris Cité (FR)
- Centre de Recherche Épidémiologie et Statistique (FR)
- University of Arizona (US)
- Second Military Medical University (CN)
- Anhui Medical University (CN)
- Eastern Hepatobiliary Surgery Hospital (CN)