This is an overview page with metadata for this scientific work. The full article is available from the publisher.
MedPromptEval: A Comprehensive Framework for Systematic Evaluation of Clinical Question Answering Systems
Citations: 0
Authors: 5
Year: 2025
Abstract
Clinical deployment of large language models (LLMs) faces critical challenges, including inconsistent prompt performance, variable model behavior, and a lack of standardized evaluation methodologies. We present MedPromptEval, a framework that systematically evaluates LLM-prompt combinations across clinically relevant dimensions. The framework automatically generates diverse prompt types, orchestrates response generation across multiple LLMs, and quantifies performance through metrics measuring factual accuracy, semantic relevance, entailment consistency, and linguistic appropriateness. We demonstrate MedPromptEval's utility across publicly available clinical question answering (QA) datasets (MedQuAD, PubMedQA, and HealthCareMagic) in three distinct evaluation modes: (1) model comparison using standardized prompts; (2) prompt strategy optimization using a controlled model; and (3) extensive assessment of prompt-model configurations. By enabling reproducible benchmarking of clinical LLM-based QA applications, MedPromptEval provides insights for optimizing prompt engineering and model selection, advancing the reliable and effective integration of language models in healthcare settings.
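The abstract describes a grid-style evaluation: every prompt strategy is paired with every model, and each cell is scored with several metrics over a QA dataset. The sketch below illustrates that loop structure only; all names (PROMPT_STRATEGIES, query_model, token_overlap, evaluate) are hypothetical stand-ins, since the actual MedPromptEval API and metric implementations are described in the full article, not on this page.

```python
"""Minimal, hypothetical sketch of a prompt-model evaluation grid.
Not the MedPromptEval implementation; just the loop structure the
abstract describes."""

from itertools import product
from statistics import mean

# Hypothetical prompt strategies and models (assumed names).
PROMPT_STRATEGIES = ["zero_shot", "few_shot", "chain_of_thought"]
MODELS = ["model_a", "model_b"]


def query_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM call; a real harness would dispatch
    to an inference API here."""
    return f"[{model} answer to: {prompt}]"


def token_overlap(answer: str, reference: str) -> float:
    """Toy metric in [0, 1]. The paper's metrics cover factual
    accuracy, semantic relevance, entailment consistency, and
    linguistic appropriateness; this only checks token overlap."""
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / len(r) if r else 0.0


METRICS = {"token_overlap": token_overlap}


def evaluate(dataset: list[tuple[str, str]]) -> dict:
    """Score every (prompt strategy, model) cell over a dataset of
    (question, reference_answer) pairs, averaging each metric."""
    results = {}
    for strategy, model in product(PROMPT_STRATEGIES, MODELS):
        scores = {name: [] for name in METRICS}
        for question, reference in dataset:
            prompt = f"{strategy}: {question}"  # stand-in prompt builder
            answer = query_model(model, prompt)
            for name, metric in METRICS.items():
                scores[name].append(metric(answer, reference))
        results[(strategy, model)] = {n: mean(v) for n, v in scores.items()}
    return results


if __name__ == "__main__":
    toy = [("What is hypertension?", "High blood pressure.")]
    for cell, cell_scores in evaluate(toy).items():
        print(cell, cell_scores)
```

The three evaluation modes in the abstract then correspond to slicing this grid: fixing the prompt strategy to compare models, fixing the model to compare prompt strategies, or reporting the full grid.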