OpenAlex · Updated hourly · Last updated: 05.04.2026, 21:16

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Developing and evaluating multimodal large language model for orthopantomography analysis to support clinical dentistry

2026 · 0 citations · Cell Reports Medicine · Open Access

Open full text at the publisher

0 citations · 8 authors · Year: 2026

Abstract

Orthopantomography (OPG) is a primary screening tool for initial dental diagnosis, yet existing AI support typically operates in a one-way manner, lacking the interactive nature of systems like ChatGPT. To address this, we introduce ToothXpert, a dental diagnostic system built on a multimodal large language model. We curated a dental concept alignment dataset and a comprehensive multimodal OPG dataset, MM-OPG (comprising 131,065 question-answer [QA] pairs), covering 11 key conditions. ToothXpert produces simultaneous visual and language responses, enabling dynamic dentistry support. On the internal test dataset with 4,950 QA pairs, ToothXpert achieves a macro F1 score of 78.61%, outperforming LLaVA v.1.5 by 23.19%, LLaVA-Med v.1.5 by 41.39%, Qwen-VL by 56.38%, and HuatuoGPT-Vision by 26.74%. In human assessment, ToothXpert achieves an average score of 3.54, surpassing the compared multimodal large language models (MLLMs), which score 1.46 and 1.38. Moreover, on the external dataset, ToothXpert achieves 1.96% and 2.99% higher F1 scores than two junior dentists with three years' clinical experience, while requiring significantly less time.
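The headline metric above, macro F1, averages the per-condition F1 scores with equal weight, so rare dental conditions count as much as common ones. A minimal sketch of that computation, using hypothetical per-class counts that are not from the paper:

```python
# Macro F1: compute F1 per condition, then take the unweighted mean.
# The (tp, fp, fn) counts below are illustrative only, not from MM-OPG.

def f1(tp, fp, fn):
    # Equivalent to 2*precision*recall / (precision + recall).
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(per_class_counts):
    scores = [f1(tp, fp, fn) for tp, fp, fn in per_class_counts]
    return sum(scores) / len(scores)

# Hypothetical (tp, fp, fn) for three conditions:
counts = [(90, 10, 10), (40, 20, 40), (70, 5, 25)]
print(round(macro_f1(counts) * 100, 2))  # → 76.5
```

Because each class contributes equally, a model that ignores an infrequent condition is penalized heavily, which is why macro F1 is a common choice for imbalanced multi-condition screening tasks like this one.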

Related works