This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Med-RewardBench: Benchmarking Reward Models and Judges for Medical Multimodal Large Language Models
Citations: 0
Authors: 9
Year: 2025
Abstract
Multimodal large language models (MLLMs) hold significant potential in medical applications, including disease diagnosis and clinical decision-making. However, these tasks require highly accurate, context-sensitive, and professionally aligned responses, making reliable reward models and judges critical. Despite their importance, medical reward models (MRMs) and judges remain underexplored, with no dedicated benchmarks addressing clinical requirements. Existing benchmarks focus on general MLLM capabilities or evaluate models as solvers, neglecting essential evaluation dimensions like diagnostic accuracy and clinical relevance. To address this, we introduce Med-RewardBench, the first benchmark specifically designed to evaluate MRMs and judges in medical scenarios. Med-RewardBench features a multimodal dataset spanning 13 organ systems and 8 clinical departments, with 1,026 expert-annotated cases. A rigorous three-step process ensures high-quality evaluation data across six clinically critical dimensions. We evaluate 32 state-of-the-art MLLMs, including open-source, proprietary, and medical-specific models, revealing substantial challenges in aligning outputs with expert judgment. Additionally, we develop baseline models that achieve marked performance improvements through fine-tuning.
Similar Works
"Why Should I Trust You?"
2016 · 14,366 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,716 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,254 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,678 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,430 citations