OpenAlex · Updated hourly · Last updated: 12.03.2026, 07:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Critique of impure reason: Unveiling the reasoning behaviour of medical large language models

2025 · 4 citations · eLife · Open Access
Open full text at publisher

Citations: 4
Authors: 2
Year: 2025

Abstract

Despite the current ubiquity of large language models (LLMs) across the medical domain, there is a surprising lack of studies which address their <i>reasoning behaviour</i>. We emphasise the importance of understanding <i>reasoning behaviour</i> as opposed to high-level prediction accuracies, since it is equivalent to explainable AI (XAI) in this context. In particular, achieving XAI in medical LLMs used in the clinical domain will have a significant impact across the healthcare sector. Therefore, in this work, we adapt the existing concept of <i>reasoning behaviour</i> and articulate its interpretation within the specific context of medical LLMs. We survey and categorise current state-of-the-art approaches for modelling and evaluating <i>reasoning</i> in medical LLMs. Additionally, we propose theoretical frameworks which can empower medical professionals or machine learning engineers to gain insight into the low-level reasoning operations of these previously obscure models. We also outline key open challenges facing the development of <i>large reasoning models</i>. The subsequent increased transparency and trust in medical machine learning models by clinicians as well as patients will accelerate the integration, application as well as further development of medical AI for the healthcare system as a whole.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare