OpenAlex · Updated hourly · Last updated: 21.03.2026, 07:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating the Methodological Quality of Artificial Intelligence–Assisted Systematic Reviews: Protocol for a Mixed Methods Meta-Research Study (Preprint)

2025 · 0 citations · 10 authors · Open Access

Open full text at publisher

Abstract

<sec> <title>BACKGROUND</title> Artificial intelligence (AI), including large language models (LLMs), is increasingly integrated into systematic review (SR) workflows. AI tools may accelerate searching, screening, data extraction, and reporting, but their effects on methodological quality, reporting completeness, transparency, and reproducibility remain uncertain. Existing evaluations largely examine isolated tasks, and inconsistent disclosure of AI use limits reproducibility and oversight. </sec> <sec> <title>OBJECTIVE</title> This four-phase mixed-methods meta-research study will: (1) compare the methodological quality of AI-assisted versus traditional SRs; (2) refine, finalize, and apply a preliminary AI Transparency and Disclosure Index (AITDI); (3) evaluate reproducibility by comparing outputs across repeated runs of the same AI model, across different AI models, and between AI models and human reviewers at multiple SR stages; and (4) explore knowledge user perspectives on rigor, transparency, and trust in AI-assisted SRs. </sec> <sec> <title>METHODS</title> We will conduct a matched cohort analysis of SRs published from 2023 to 2025 in biomedical journals. Each AI-assisted SR will be matched 1:2 with traditional SRs by publication year, clinical domain, review type, and meta-analysis status. Two independent reviewers will apply AMSTAR-2 (methodological quality), PRISMA 2020 (reporting completeness), and, when applicable, ROBIS (risk-of-bias rigor). A preliminary AITDI will be refined and then applied to all AI-assisted SRs. Reproducibility will be assessed using SR-derived tasksets to compare outputs across repeated runs of the same model, across different models, and between AI and human reviewers at key SR stages. Semi-structured interviews with authors, editors, clinicians, policymakers, and patient partners will be analyzed using reflexive thematic analysis. 
</sec> <sec> <title>RESULTS</title> As of December 2025, the study has been preregistered on OSF (DOI: 10.17605/OSF.IO/Q5JRW), the search strategy has been finalized, and title/abstract screening has begun. Data extraction is planned for March to May 2026, followed by AITDI refinement and reproducibility testing from May to October 2026. Qualitative interviews are anticipated from October 2026 to February 2027, with final analyses by April 2027 and dissemination planned for mid-2027. </sec> <sec> <title>CONCLUSIONS</title> This study will provide one of the first empirical comparisons of methodological quality, transparency, and reproducibility of AI-assisted versus traditional SRs in the LLM era. Findings will inform expectations for responsible AI integration and support refinement of reporting and methodological best practices, including future development of AI-specific reporting and appraisal extensions (e.g., PRISMA-LLM, AMSTAR-LLM). </sec> <sec> <title>CLINICALTRIAL</title> N/A </sec>
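The pairwise comparisons described in the Methods (repeated runs of the same model, different models, and model versus human reviewers) are commonly summarized with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is illustrative only, assuming binary include/exclude screening decisions as hypothetical data; it is not the study's actual pipeline or tooling.

```python
# Illustrative sketch: Cohen's kappa between two screening runs on the same items.
# All labels below are hypothetical include/exclude decisions, not study data.

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items where both runs decide the same.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each run's marginal label frequencies.
    labels = set(a) | set(b)
    expected = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    if expected == 1.0:
        return 1.0  # both runs are constant and identical
    return (observed - expected) / (1 - expected)

run1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
run2 = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(run1, run2), 2))  # → 0.33
```

The same function would apply unchanged to model-versus-human comparisons; for more than two runs, a multi-rater statistic such as Fleiss' kappa would be the usual extension.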

Topics

Artificial Intelligence in Healthcare and Education · Meta-analysis and systematic reviews · Scientific Computing and Data Management