This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Guidance for manuscript submissions testing the use of generative AI for systematic review and meta-analysis
Citations: 0
Authors: 6
Year: 2025
Abstract
Machine learning (ML) and generative artificial intelligence (GenAI) have great potential to improve key stages of systematic reviews and meta-analyses, such as searching, screening (title/abstract and full-text), and data extraction. Research Synthesis Methods welcomes manuscripts that evaluate ML and GenAI methods across different stages of the systematic review and meta-analysis (SRMA) process. This guidance outlines requirements for manuscripts that evaluate the performance of ML or GenAI methods in SRMA, detailing expectations for the reporting of the methodology, validation, and results of these evaluations. Note that GenAI models differ from ML models in that the outputs of GenAI models can vary depending on the prompt, the model version, and random chance. Thus, evaluating the use cases of GenAI models in stages of SRMA requires particular attention. This guide adopts the principles itemized in other ongoing efforts, such as the Responsible AI in Evidence Synthesis guidance and recommendations [1] and Digital Evidence Synthesis Tools [2], to ensure responsible and transparent use of AI in SRMA methodologies.

Research design and method

Authors should clearly describe their research design. The experimental design must demonstrate a robust methodology that allows for replication and validation across different SRMA contexts. Thus, authors should detail (where applicable): (1) the sampling methodology and dataset characteristics (specifying whether the sample includes all studies or only a subset, with explicit reporting of the number of studies in both the full set and any subsets); (2) the variables under consideration; (3) preprocessing methods; (4) clearly defined research questions; (5) heterogeneity considerations; (6) prompts (specifying how each prompt was developed and tested relative to the current study); and (7) a methodological justification, including appropriate use cases, limitations, and scenarios in which use may be inappropriate.
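To make the evaluation and validation expectations above concrete, the sketch below computes common performance metrics for a screening step by comparing model include/exclude decisions against human reference decisions. The choice of metrics (recall/sensitivity, precision, specificity) is an illustrative assumption on our part, not a set mandated by the guidance itself.

```python
# Minimal sketch: evaluating an ML/GenAI title/abstract screening step
# against human reference decisions (True = include, False = exclude).
# Metric choices here are illustrative assumptions, not requirements of the guidance.

def screening_metrics(reference, predicted):
    """Return recall, precision, and specificity for predicted screening
    decisions compared with human reference decisions."""
    tp = sum(r and p for r, p in zip(reference, predicted))
    fp = sum((not r) and p for r, p in zip(reference, predicted))
    fn = sum(r and (not p) for r, p in zip(reference, predicted))
    tn = sum((not r) and (not p) for r, p in zip(reference, predicted))
    return {
        "recall": tp / (tp + fn) if tp + fn else 0.0,       # sensitivity
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Hypothetical example: six records screened by a model and by humans.
reference = [True, True, True, False, False, False]
predicted = [True, True, False, True, False, False]
print(screening_metrics(reference, predicted))
```

Because GenAI outputs can vary with the prompt, model version, and random chance, such metrics would typically be reported per model version and prompt variant, ideally over repeated runs.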
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations