This is an overview page with metadata for this scientific article. The full article is available from the publisher.
MedEvalArena: A Self-Generated, Peer-Judged Benchmark for Medical Reasoning
Citations: 0
Authors: 7
Year: 2026
Abstract
Large Language Models (LLMs) demonstrate strong performance on medical specialty board multiple-choice question (MCQ) answering; however, they underperform in more complex medical reasoning scenarios. This gap indicates a need to improve both LLM medical reasoning and evaluation paradigms. We introduce MedEvalArena, a framework in which LLMs engage in a symmetric round-robin format. Each model generates challenging board-style medical MCQs, then serves on an ensemble LLM-as-judge bench to adjudicate the validity of generated questions, and finally completes the validated exam as an examinee. We compare the performance of leading LLMs from the OpenAI, Grok, Gemini, Claude, Kimi, and DeepSeek families on both question-generation validity and exam-taking performance. Across frontier models, we observe no statistically significant differences in exam-taking performance, suggesting that frontier LLMs have converged in medical question-answering ability. We observe higher question validity rates for questions generated by OpenAI, Gemini, and Claude frontier models than for Kimi, Grok, and DeepSeek models. When accuracy and inference cost are considered jointly, multiple frontier models lie on the Pareto frontier, with no single model dominating across both dimensions. MedEvalArena provides a dynamic and scalable framework for benchmarking LLM medical reasoning.
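To make the round-robin protocol concrete, the sketch below illustrates one plausible reading of the generate / judge / answer loop described in the abstract. The model interface, prompts, majority-vote validity rule, and self-exclusion of each author from judging and answering its own questions are illustrative assumptions; the paper's actual prompts and adjudication criteria may differ.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical model interface: a "model" is any callable mapping a prompt to a text reply.
Model = Callable[[str], str]

@dataclass
class MCQ:
    author: str      # name of the generating model
    question: str    # stem plus answer options
    answer: str      # intended correct option letter, e.g. "B"

def run_arena(models: Dict[str, Model], n_questions: int = 1) -> Dict[str, float]:
    """Round-robin sketch: every model generates MCQs, its peers judge validity,
    and each model answers the surviving questions as an examinee."""
    # 1) Generation: each model authors its own board-style questions.
    exam: List[MCQ] = []
    for name, model in models.items():
        for _ in range(n_questions):
            raw = model("Write one challenging board-style medical MCQ. "
                        "End with 'ANSWER: <letter>'.")
            stem, _, answer = raw.rpartition("ANSWER:")
            exam.append(MCQ(author=name, question=stem.strip(), answer=answer.strip()))

    # 2) Peer judging: the non-authoring models vote on validity;
    #    a simple majority keeps the question in the exam (assumed rule).
    validated: List[MCQ] = []
    for item in exam:
        judges = [m for n, m in models.items() if n != item.author]
        votes = sum(
            "yes" in judge("Is this MCQ medically valid and unambiguous? "
                           f"Reply yes or no.\n\n{item.question}").lower()
            for judge in judges
        )
        if votes > len(judges) / 2:
            validated.append(item)

    # 3) Exam-taking: each model answers every validated question it did not author.
    scores: Dict[str, float] = {}
    for name, model in models.items():
        items = [q for q in validated if q.author != name]
        correct = sum(
            model(f"{q.question}\nAnswer with the option letter only.")
            .strip().upper().startswith(q.answer.upper())
            for q in items
        )
        scores[name] = correct / len(items) if items else 0.0
    return scores
```

A caller would supply a dictionary such as `{"model_a": call_model_a, "model_b": call_model_b, ...}` wrapping the respective APIs; the returned dictionary maps each model name to its accuracy on peer-validated questions.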