OpenAlex · Updated hourly · Last updated: 14.03.2026, 11:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

One shot at trust: building credible evidence for medical artificial intelligence

2025 · 1 citation · The Lancet Digital Health · Open Access

Citations: 1 · Authors: 3 · Year: 2025

Abstract

The landscape of medical artificial intelligence (AI) is experiencing unprecedented momentum. Since January, 2023, the number of publications on this subject has remarkably increased, with each new use case for large language models (LLMs) generating cascading enthusiasm through academic journals, social media, and the mainstream press.[1] Although this fervour reflects genuine technological advances, the rapid pace of development and deployment also demands thoughtful consideration of how these findings are being evaluated and communicated. Given that outcomes in the field of medicine directly affect human lives, the stakes demand particular vigilance. LLM research is rapidly permeating clinical practice, from documentation to diagnostic support. The key challenge is not managing the pace of development and deployment but rather ensuring alignment between reported capabilities and real-world value. An increasing disconnect between technological promises and meaningful outcomes risks creating a trust deficit that could undermine sustainable adoption and integration into clinical practice. The evaluation of LLMs in medicine demands a rigorous preimplementation research methodology that parallels the established paradigms of clinical investigation.[2]

Preclinical and simulation studies are an important step in the clinical-translational pathway, and investigators who engage meaningfully in this work deserve applause. However, a trend of overstated conclusions and representations regarding the level of evidence that such studies provide to the clinical community is a cause for concern. This pattern becomes particularly concerning when such studies gain rapid traction in high-impact journals and subsequent media coverage, potentially distorting the actual state of evidence. Compounding this risk is a troubling trend towards imprecise and overstated communication of medical AI research findings. Preliminary or simulation-based results have often been described using terms such as randomised controlled trials or randomised clinical trials.[3][4][5] Similar terms have been used for medical education simulations, although these are more appropriately described as an examination of their effect on medical students explicitly in simulation settings.[6] However, when such trials are synthetic and their methods bear little resemblance to real-world clinical
