This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Batch Size Effects on Mid‐2025 State‐of‐the‐Art Large Language Model Performance in Automated Title and Abstract Screening
Citations: 0
Authors: 7
Year: 2026
Abstract
Background: Manual abstract screening is a primary bottleneck in evidence synthesis. Emerging evidence suggests that large language models (LLMs) can automate this task, but their performance when processing multiple references simultaneously in "batches" is uncertain.

Objectives: To evaluate the classification performance of four state-of-the-art LLMs (Gemini 2.5 Pro, Gemini 2.5 Flash, GPT-5, and GPT-5 mini) in predicting reference eligibility across a wide range of batch sizes for a systematic review of randomized controlled trials.

Methods: We used a gold-standard dataset of 790 references (93 considered relevant) from a published Cochrane Review on stem cell treatment for acute myocardial infarction. Using the public API for each model, batches of 1 to 790 references were submitted for classification of each reference as "Include" or "Exclude." Performance was assessed using sensitivity and specificity, with internal validation conducted through 10 repeated runs for each model-batch combination.

Results: Gemini 2.5 Pro was the most robust model, successfully processing the full 790-reference batch. In contrast, GPT-5 failed at batch sizes ≥400, while GPT-5 mini and Gemini 2.5 Flash failed at the 790-reference batch. Overall, all models demonstrated strong performance within their operational ranges, with two notable exceptions: Gemini 2.5 Flash showed low initial sensitivity at batch size 1, and GPT-5 mini's sensitivity degraded at higher batch sizes (from 0.88 at batch size 200 to 0.48 at batch size 400). At a practical batch size of 100, Gemini 2.5 Pro achieved the highest sensitivity (1.00, 95% CI 1.00-1.00), whereas GPT-5 delivered the highest specificity (0.98, 95% CI 0.98-0.98).

Conclusion: State-of-the-art LLMs can effectively screen multiple abstracts per prompt, moving beyond inefficient single-reference processing. However, performance is model-dependent, with trade-offs between sensitivity and specificity. Batch size optimization and strategic model selection are therefore key considerations for successful implementation.
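The abstract's Methods section evaluates each model-batch combination by sensitivity and specificity over "Include"/"Exclude" labels against a gold standard. A minimal sketch of that evaluation step, using the standard definitions (sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)); the helper name and example data are illustrative, not taken from the paper:

```python
# Illustrative sketch: sensitivity and specificity for binary
# "Include"/"Exclude" screening decisions against a gold standard.
# Function name and sample labels are hypothetical, not from the paper.

def sensitivity_specificity(gold, predicted):
    """Return (sensitivity, specificity) for paired label lists."""
    pairs = list(zip(gold, predicted))
    tp = sum(g == "Include" and p == "Include" for g, p in pairs)
    fn = sum(g == "Include" and p == "Exclude" for g, p in pairs)
    tn = sum(g == "Exclude" and p == "Exclude" for g, p in pairs)
    fp = sum(g == "Exclude" and p == "Include" for g, p in pairs)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Toy example: 2 relevant and 3 irrelevant references.
gold      = ["Include", "Include", "Exclude", "Exclude", "Exclude"]
predicted = ["Include", "Exclude", "Exclude", "Exclude", "Include"]
sens, spec = sensitivity_specificity(gold, predicted)
print(round(sens, 2), round(spec, 2))  # 0.5 0.67
```

In the study, this computation would be repeated 10 times per model-batch combination to obtain the reported confidence intervals.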
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations