OpenAlex · Updated hourly · Last update: 18.03.2026, 16:40

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Calibration of AI large language models with human subject matter experts for grading of clinical short-answer responses in dental education

2026 · 1 citation · BMC Oral Health · Open Access
Open full text at the publisher

Citations: 1
Authors: 6
Year: 2026

Abstract

The automated grading of clinical short-answer questions using large language models (LLMs) could alleviate faculty workload and improve the immediacy of feedback in dental education. However, evidence on the capacity of LLMs for rubric-based grading in dentistry remains limited. This study therefore compared the grading reliability and error patterns of two LLMs, ChatGPT-4 and the open-weight DeepSeek-3, against expert human evaluators. In a retrospective cross-sectional study with a comparative validation design, we analyzed 2,358 short-answer responses from 262 undergraduate dental students across nine clinical questions. All responses were graded by three calibrated subject-matter experts (SMEs; intraclass correlation coefficient [ICC] = 0.84) to provide a reference standard. Each LLM was given a 12-point analytic rubric to guide grading but no prior examples of the grading task (i.e., a zero-shot prompt). We assessed agreement using ICC, Pearson correlation, Cohen’s kappa, and mixed-effects models, and examined error tiers (≤ 1, 2–3, > 3 points) across Bloom’s levels and response styles. In this dataset, DeepSeek-3 obtained an ICC of 0.87, compared with 0.64 for ChatGPT-4. DeepSeek-3 matched human scores exactly in 43.3% of cases and was within ± 1 point in 62.4%, compared with 35.5% and 44.1% for ChatGPT-4. High-error rates (> 3 points) were 7.5% for DeepSeek-3 vs. 26.9% for ChatGPT-4 (χ², p < 0.01). DeepSeek-3’s agreement was consistent across cognitive levels and response verbosity, whereas ChatGPT-4’s accuracy on higher-level and verbose responses was significantly lower (p < 0.01). Both models exhibited an optimistic bias by over-scoring incorrect answers. DeepSeek-3 showed fewer large-magnitude errors and better agreement with human graders than ChatGPT-4, suggesting its potential value for large-scale AI-assisted assessment in dental education. Because both models can over-score incorrect responses, human-in-the-loop oversight is necessary for high-stakes applications. Further work should evaluate performance across more courses, institutions, and languages, and examine the effects of model calibration, bias reduction, and external validation before broader integration of LLMs into dental education is considered.
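The abstract reports agreement between each model and the human reference using ICC, Pearson correlation, Cohen’s kappa, and error tiers of absolute score difference on a 12-point rubric. Below is a minimal Python sketch of how such agreement metrics could be computed; the toy data, column names, and library choices (pingouin, SciPy, scikit-learn) are illustrative assumptions, not the authors’ actual analysis pipeline.

```python
# Hypothetical sketch: agreement between LLM scores and human reference scores
# on a 0-12 rubric, mirroring the metrics named in the abstract.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Toy data: one row per student response (assumed structure).
scores = pd.DataFrame({
    "response_id": range(6),
    "human": [10, 4, 7, 12, 3, 8],
    "llm":   [10, 6, 7, 11, 5, 8],
})

# Pearson correlation between human and model scores.
r, _ = pearsonr(scores["human"], scores["llm"])

# Cohen's kappa; quadratic weights are common for ordinal rubric scores.
kappa = cohen_kappa_score(scores["human"], scores["llm"], weights="quadratic")

# Two-way ICC: reshape to long format (one row per response x rater).
long = scores.melt(id_vars="response_id", var_name="rater", value_name="score")
icc = pg.intraclass_corr(data=long, targets="response_id",
                         raters="rater", ratings="score")

# Error tiers from the abstract: <= 1, 2-3, > 3 points of absolute difference.
abs_err = (scores["llm"] - scores["human"]).abs()
tiers = pd.cut(abs_err, bins=[-0.1, 1, 3, 12], labels=["<=1", "2-3", ">3"])

print(f"Pearson r = {r:.2f}, weighted kappa = {kappa:.2f}")
print(icc[["Type", "ICC"]])
print(tiers.value_counts())
```

In a real analysis the ICC variant (e.g., two-way random, absolute agreement) and the kappa weighting would need to match the study’s reported choices, which the abstract does not specify.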
