This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Bibliometric, methodological and reporting characteristics of systematic reviews with explicit AI disclosure statements: an exploratory meta-research study
Citations: 0
Authors: 6
Year: 2026
Abstract
Background: The exponential increase in systematic reviews (SRs), accelerated by LLM-based generative AI and non-LLM automation tools, risks redundancy, overlap, and research waste. However, there is limited empirical evidence on how SRs that disclose AI use apply and report these tools in practice, including the extent of transparency and validation.

Objective: To assess the methodological and reporting features of SRs that explicitly acknowledge the use of LLM-based and non-LLM automation tools in a dedicated statement, and to examine how these features relate to the bibliometric characteristics of these SRs.

Methods: An exploratory, cross-sectional, meta-research study with individual SRs as the unit of analysis. A random sample was drawn from a purposively defined stratum comprising only SRs with designated AI statements. Screening was conducted by a single researcher; data extraction was performed by one researcher and independently verified by four others. Descriptive analyses were supplemented by Wilcoxon rank-sum tests, Spearman's ρ, and χ² tests.

Results: We included 188 SRs; 75% reported using LLMs, and in 92% of studies LLM-based and non-LLM automation tools were used for manuscript writing. Reviews with designated AI statements were predominantly published in Elsevier or Elsevier-supported journals (70.2%). Only 42% referenced a pre-registered protocol; the median time from protocol registration to first journal submission was 267 days. Reviews with more included studies were published in higher-impact journals (ρ = 0.34, p < 0.0001), as were reviews led by authors affiliated with high-income countries (W = 1931.5, p < 0.0001). Reviews with more authors were more likely to have a pre-registered protocol (χ² = 20.54, p < 0.0001), and pre-registered reviews more often adhered to a reporting checklist (χ² = 8.93, p = 0.0027). LLM-based and non-LLM automation tools were used predominantly for writing. Sharing of prompts and human-validation procedures was insufficient, and many reviews exhibited methodological and reporting weaknesses. Clearer guidance is needed to support transparent, rigorous use of LLM-based and non-LLM automation tools in SRs.

Conclusions: Early experience with LLM-based and non-LLM automation tools, as reflected in dedicated AI statement sections, indicates that in most cases these tools were reportedly used solely for proofreading and writing assistance, predominantly via ChatGPT. These SRs were more likely to be published in Elsevier or Elsevier-supported journals. Reviews led by authors affiliated with institutions in high-income countries, and reviews including more studies, tended to be published in higher-impact journals. Reviews with more authors were more likely to have a pre-registered protocol, and those with a pre-registered protocol were more likely to have adhered to a reporting checklist. Approximately 40% of the included SRs that reported conforming to a checklist either cited outdated versions, misused them, or provided insufficient documentation of adherence.
Similar works
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
2021 · 85,575 citations
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
2009 · 82,820 citations
The Measurement of Observer Agreement for Categorical Data
1977 · 77,011 citations
Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement
2009 · 62,852 citations
Measuring inconsistency in meta-analyses
2003 · 61,558 citations