OpenAlex · Updated hourly · Last updated: 14.03.2026, 16:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Asking the right questions: Benchmarking large language models in the development of clinical consultation templates

2025 · 0 citations · 18 authors · Open Access

Abstract

This study evaluates the capacity of large language models (LLMs) to generate structured clinical consultation templates for electronic consultation. Using 145 expert-crafted templates developed and routinely used by Stanford’s eConsult team, we assess frontier models—including o3, GPT-4o, Kimi K2, Claude 4 Sonnet, Llama 3 70B, and Gemini 2.5 Pro—for their ability to produce clinically coherent, concise, and prioritized clinical question schemas. Through a multi-agent pipeline combining prompt optimization, semantic autograding, and prioritization analysis, we show that while models like o3 achieve high comprehensiveness (up to 92.2%), they consistently generate excessively long templates and fail to correctly prioritize the most clinically important questions under length constraints. Performance varies across specialties, with significant degradation in narrative-driven fields such as psychiatry and pain medicine. Our findings demonstrate that LLMs can enhance structured clinical information exchange between physicians, while highlighting the need for more robust evaluation methods that capture a model’s ability to prioritize clinically salient information within the time constraints of real-world physician communication. Limitations include reliance on Stanford-specific templates and concordance-based grading, which may not capture all clinically reasonable outputs.

Poster for the Pacific Symposium on Biocomputing 2026
