OpenAlex · Updated hourly · Last updated: 14.05.2026, 07:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial Authority: The Promise and Perils of LLM Judges in Healthcare

2026 · 1 citation · Bioengineering · Open Access

1 citation · 8 authors · Year: 2026

Abstract

BACKGROUND: Large language models (LLMs) are increasingly integrated into clinical documentation, decision support, and patient-facing applications across healthcare, including plastic and reconstructive surgery. Yet, their evaluation remains bottlenecked by costly, time-consuming human review. This has given rise to LLM-as-a-judge, in which LLMs are used to evaluate the outputs of other AI systems.

METHODS: This review examines LLM-as-a-judge in healthcare with particular attention to judging architectures, validation strategies, and emerging applications. A narrative review of the literature was conducted, synthesizing LLM judge methodologies as well as judging paradigms, including those applied to clinical documentation, medical question-answering systems, and clinical conversation assessment.

RESULTS: Across tasks, LLM judges align most closely with clinicians on objective criteria (e.g., factuality, grammaticality, internal consistency), benefit from structured evaluation and chain-of-thought prompting, and can approach or exceed inter-clinician agreement, but remain limited for subjective or affective judgments and by dataset quality and task specificity.

CONCLUSIONS: The literature indicates that LLM judges can enable efficient, standardized evaluation in controlled settings; however, their appropriate role remains supportive rather than substitutive, and their performance may not generalize to complex plastic surgery environments. Their safe use depends on rigorous human oversight and explicit governance structures.
