OpenAlex · Updated hourly · Last updated: 28 Mar 2026, 03:11

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

An Evaluation of the Performance of OpenAI-o1 and GPT-4o in the Japanese National Examination for Physical Therapists

2025 · 5 citations · Cureus · Open Access
Open full text at publisher

5 citations · 6 authors · 2025

Abstract

Background and objective: Recent advancements in large language models (LLMs) have expanded their applications in medical and healthcare settings. LLMs have demonstrated high performance in various national examinations for healthcare professionals. Open Artificial Intelligence Model Version 1 (OpenAI-o1) attained remarkable accuracy in the Japanese National Examination for Medical Practitioners, whereas Generative Pre-trained Transformer Model Version 4 (GPT-4o) has excelled in image-based tasks, suggesting a complementary relationship between the two models. However, their performance in the field of physical therapy, particularly in the Japanese National Examination, remains poorly understood. This study aimed to assess the performance of OpenAI-o1 and GPT-4o in the 59th Japanese National Examination for Physical Therapists (JNEPT) in 2024.

Methods: A total of 168 text-only questions were administered to OpenAI-o1, and 23 image-based questions were given to GPT-4o, in a zero-shot prompting format. Accuracy was evaluated by comparing the model outputs with the official correct answers released by the Ministry of Health, Labour and Welfare. Two faculty members specializing in the National Examination for Physical Therapists reviewed all generated explanations for accuracy.

Results: OpenAI-o1 achieved a correctness rate of 97.0% (163/168 questions) and an explanation accuracy of 86.4% (146/168). In contrast, GPT-4o attained a correctness rate of 56.5% (13/23 questions) and an explanation accuracy of 52.2% (12/23). OpenAI-o1's primary explanatory errors involved outdated or incorrect knowledge (13 questions), overly simplified discussions (six questions), and misinterpretation of question intent (three questions). GPT-4o's most common error type was misinterpretation of a question's intent due to difficulties in image analysis (eight questions), along with three instances of knowledge-level inaccuracies.

Conclusions: OpenAI-o1 exhibited high accuracy and solid explanatory quality, indicating strong adaptability to both general and specialized content in physical therapy, and showed potential utility in medical education and remote healthcare support. GPT-4o, while showing enhanced multimodal capabilities compared with previous models, requires further optimization in image-based reasoning and domain-specific training. These findings underscore the promising role of LLMs in healthcare and medical education while highlighting the importance of ongoing refinement to meet the rigorous demands of clinical and educational environments.
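The scoring procedure described in the Methods (matching zero-shot model outputs against the official answer key) can be sketched as below. This is a minimal illustration, not the authors' actual pipeline; the question IDs and answers are hypothetical placeholders, and only the two headline percentages (163/168 and 13/23) come from the paper.

```python
def correctness_rate(model_answers, answer_key):
    """Fraction of questions where the model's choice matches the official key."""
    graded = [model_answers[q] == correct for q, correct in answer_key.items()]
    return sum(graded) / len(graded)

# Hypothetical example: a three-question key and one model's choices.
answer_key = {"Q1": "b", "Q2": "d", "Q3": "a"}
model_answers = {"Q1": "b", "Q2": "c", "Q3": "a"}
print(correctness_rate(model_answers, answer_key))  # 2 of 3 correct

# The reported rates follow the same arithmetic:
print(round(163 / 168 * 100, 1))  # 97.0 (OpenAI-o1, text-only questions)
print(round(13 / 23 * 100, 1))    # 56.5 (GPT-4o, image-based questions)
```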


Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills