This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Performance of Large Language Models on Cognitive Aptitude Testing: A Multi-Run Evaluation on the German Medical School Admission Test (TMS)
Citations: 0 · Authors: 4 · Year: 2026
Abstract
Current LLMs show limited and domain-dependent performance on cognitive aptitude tasks relevant to medical school admission. High accuracy on knowledge-based examinations does not translate into stable performance on aptitude tests emphasizing fluid intelligence. The observed modality-dependent performance patterns and inter-run variability highlight the importance of differentiated, multi-run evaluation strategies when assessing LLMs for applications in medical education.
Similar Works
TRANSFER OF TRAINING: A REVIEW AND DIRECTIONS FOR FUTURE RESEARCH
1988 · 3,196 citations
Systematic Review of Depression, Anxiety, and Other Indicators of Psychological Distress Among U.S. and Canadian Medical Students
2006 · 2,849 citations
Likert scales: how to (ab)use them
2004 · 2,349 citations
Evaluating professional development
1999 · 2,279 citations
Implicit bias in healthcare professionals: a systematic review
2017 · 2,272 citations