This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial intelligence tools for automating evidence synthesis: A scoping review (Preprint)
Citations: 0
Authors: 9
Year: 2025
Abstract
BACKGROUND: Rapidly and accurately synthesizing large volumes of evidence is a time- and resource-intensive process. Once published, reviews often risk becoming outdated, limiting their usefulness for decision-makers. Recent advances in artificial intelligence (AI) have enabled researchers to automate various stages of the evidence synthesis process, from literature searching and screening to data extraction.

OBJECTIVE: We aimed to map the current landscape of AI tools used to automate evidence synthesis.

METHODS: Following the JBI methodology for scoping reviews, we searched Ovid MEDLINE, Ovid Embase, Scopus, and Web of Science in February 2025 and conducted a grey literature search in April 2025. We included articles published in any language from January 2021 onwards. Two reviewers independently screened citations using Rayyan, and we extracted data on study design and key AI-related technical features.

RESULTS: We identified 7,841 unique citations through database searches and 19 additional records through the grey literature search. A total of 222 articles were included in the review. We identified 65 AI tools that automate either specific tasks or the entire evidence synthesis process. More than half of the included studies were published in 2024, reflecting a trend toward the use of general-purpose large language models (LLMs) for evidence synthesis. Title and abstract screening, along with data extraction, were the most studied tasks for automation.

CONCLUSIONS: A broad, evolving suite of AI tools is available to support automation in evidence synthesis, leveraging increasingly complex AI methods. Optimal tool selection will likely depend on the review topic, researcher priorities, and the specific tasks to be automated. While these tools offer potential for reducing manual workload, ongoing evaluation to mitigate AI bias and to ensure the quality and integrity of reviews is essential for safeguarding evidence-based decision-making.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations