OpenAlex · Updated hourly · Last updated: 22.03.2026, 16:02

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Utility of Generative Artificial Intelligence for Japanese Medical Interview Training: A Randomized Crossover Pilot Study (Preprint)

2025 · 0 citations · 7 authors

Open full text at the publisher

Abstract

<sec> <title>BACKGROUND</title> The medical interview remains a cornerstone of clinical training. There is growing interest in applying generative artificial intelligence (AI) in medical education, including medical interview training. However, its utility in culturally and linguistically specific contexts, including Japanese, remains underexplored. This study investigated the utility of generative AI for Japanese medical interview training. </sec> <sec> <title>OBJECTIVE</title> This pilot study aimed to evaluate the utility of generative AI as a tool for medical interview training by comparing its performance with that of traditional face-to-face training methods using a simulated patient. </sec> <sec> <title>METHODS</title> We conducted a randomized crossover pilot study involving 20 postgraduate year 1-2 physicians from a university hospital. Participants were randomly allocated into two groups. Group A began with an AI-based station on a case involving abdominal pain, followed by a traditional station with a standardized patient presenting chest pain. Group B followed the reverse order, starting with the traditional station for abdominal pain, followed by an AI-based station for the chest pain scenario. In the AI-based stations, participants interacted with a GPTs-configured platform that simulated patient behaviors. GPTs are customizable versions of ChatGPT adapted for specific purposes. The traditional stations involved face-to-face interviews with a simulated patient. Both groups used identical, standardized case scenarios to ensure uniformity. Two independent evaluators, blinded to the study conditions, assessed participants' performances using six defined metrics: patient care and communication, history taking, physical examination, accuracy and clarity of transcription, clinical reasoning, and patient management. A 6-point Likert scale was employed for scoring. Discrepancies between the evaluators were resolved through discussion.
To ensure cultural and linguistic authenticity, all interviews and evaluations were conducted in Japanese. </sec> <sec> <title>RESULTS</title> AI-based stations scored lower than traditional stations across most categories, particularly in patient care and communication (4.48 vs. 4.95, P=.009). However, AI-based stations demonstrated comparable performance in clinical reasoning, with a non-significant difference (4.43 vs. 4.85, P=.10). </sec> <sec> <title>CONCLUSIONS</title> The comparable performance of generative AI in clinical reasoning highlights its potential as a complementary tool in medical interview training. One of its main advantages lies in enabling self-learning, allowing trainees to independently practice interviews without the need for simulated patients. Nonetheless, the lower scores in patient care and communication underline the importance of maintaining traditional methods that capture the nuances of human interaction. These findings support the adoption of hybrid training models that combine generative AI with conventional approaches to enhance the overall effectiveness of medical interview training in Japan. </sec> <sec> <title>CLINICALTRIAL</title> UMIN-CTR UMIN000053747; https://center6.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000061336. </sec>

Topics

Artificial Intelligence in Healthcare and Education · Radiology practices and education · Clinical Reasoning and Diagnostic Skills