This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluating the use of Artificial Intelligence (AI) in Systematic Review Abstract Screening: A Comparative Study of AI-aided Tools
Citations: 0
Authors: 8
Year: 2026
Abstract
Background: Manual abstract screening in systematic reviews is a time-consuming and labour-intensive task. With the rise of artificial intelligence (AI), the number of published articles has grown substantially, adding to the workload of review studies that rely on robust and timely evidence synthesis. At the same time, AI-aided screening tools have been developed to accelerate this process. While previous studies have demonstrated the efficiency of such tools, ongoing technological advances necessitate updated evaluations, particularly for tools that are freely accessible. In review types such as umbrella reviews, where both the topic area and study design are central to eligibility decisions, the performance of AI-aided tools remains underexplored. Methods: We conducted a comparative evaluation of six freely available AI-aided abstract screening tools (Rayyan, RobotAnalyst, PICO Portal, Abstrackr, ASReview, and Colandr) using a previously completed umbrella review of interdisciplinary urban planning and public health studies. We assessed (1) early recall performance (i.e., identification of included studies within the first 25% of screening), (2) feature availability and depth, and (3) user experience. This Study Within a Review (SWAR) was registered in the SWAR repository as SWAR 25. Results: All evaluated tools supported the review process by facilitating screening and offering features such as prioritization and keyword highlighting. However, none identified more than 50% of the previously included studies within the first 25% of screening. Feature analysis and user feedback suggested that Rayyan and PICO Portal provided the most useful functionality for our interdisciplinary umbrella review context, although limitations were noted in duplicate removal and in recognising the importance of study design in eligibility decisions.
Conclusions: Although a growing number of AI-assisted abstract screening tools are publicly and freely available, their accuracy, usability, and adaptability to different review designs remain limited. Enhanced support for duplicate detection and integration of study design considerations could improve their utility in umbrella reviews and other complex evidence syntheses. Continued evaluation and user training may support broader adoption across diverse research contexts.
Related Works
UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age
2015 · 12,730 citations
SEER Cancer Statistics Review, 1975-2003
2006 · 11,474 citations
NIA‐AA Research Framework: Toward a biological definition of Alzheimer's disease
2018 · 9,903 citations
Global burden of 87 risk factors in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019
2020 · 9,199 citations
Mild Cognitive Impairment
1999 · 8,882 citations