OpenAlex · Updated hourly · Last updated: 2026-03-18, 10:15

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Automation in Clinical Trial Statistical Programming: A Structured Review of TLF Generation, Validation Frameworks, and AI/ML Integration (2020–2025)

2025 · 0 citations · Open Access

Open full text at publisher

Citations: 0 · Authors: 3 · Year: 2025

Abstract

Background: Clinical trial statistical programming is transitioning from manual, study-specific coding toward metadata-driven, automated pipelines. Although the broader transformation of clinical data management has been reviewed, a comprehensive synthesis of statistical programming automation, particularly tables, listings, and figures (TLF) generation and validation frameworks, remains limited. This review addresses that gap through systematic evidence synthesis.

Methods: We conducted a structured literature review across PubMed, Google Scholar, arXiv, and industry conference proceedings (PharmaSUG, PHUSE, R/Pharma) covering January 2020 to December 2025. Evidence quality was assessed with the GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) methodology. Of 789 publications screened, 262 met the inclusion criteria for synthesis.

Results: Key findings include: (1) the pharmaverse ecosystem (rtables, Tplyr, admiral) reduced TLF development time by 15–25% (GRADE: Low); (2) risk-based validation combined with CI/CD pipelines decreased validation effort by 30–50% (GRADE: Low); (3) metadata-driven architectures enabled 40–60% specification reuse across studies (GRADE: Very Low); (4) REDCap2SDTM reduced SDTM conversion time by 75–85% (GRADE: Moderate); (5) domain-specific large language models (LLMs) achieved 88–93% F1-scores on clinical NLP tasks (GRADE: Moderate), while general-purpose models showed 60–85% accuracy for code generation (GRADE: Very Low). Critical evidence gaps persist: only 12 of 527 validation papers (2.3%) reported quantitative outcomes, and no RCTs comparing validation approaches exist.

Conclusions: Clinical programming automation has reached practical maturity. However, evidence quality remains predominantly Low to Very Low. Future priorities include RCTs comparing validation approaches, standardized outcome metrics, and regulatory guidance for AI-assisted programming.
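To make the reported LLM benchmark numbers concrete: the F1-score cited in the Results is the harmonic mean of precision and recall. A minimal sketch of how such a score is computed from confusion-matrix counts follows; the specific counts are hypothetical and chosen only to illustrate a value in the reported 88–93% range, not taken from the review.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1-score from confusion-matrix counts (binary classification)."""
    # Precision: fraction of predicted positives that are correct
    precision = tp / (tp + fp)
    # Recall: fraction of actual positives that were found
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a clinical NLP entity-extraction task:
# 90 true positives, 10 false positives, 8 false negatives
print(round(f1_score(90, 10, 8), 3))  # → 0.909
```

Because F1 penalizes an imbalance between precision and recall, a 88–93% F1 on clinical NLP tasks implies both metrics are high, which is a stronger claim than raw accuracy on imbalanced clinical corpora.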

Topics

Artificial Intelligence in Healthcare and Education · Electronic Health Records Systems · Scientific Computing and Data Management