OpenAlex · Updated hourly · Last updated: 14.03.2026, 04:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Leveraging Self-Refinement in Large Language Models to Suppress Excessive Responses for Virtual Simulated Patients

2026 · 0 citations · IEICE Transactions on Information and Systems · Open Access
Open full text at publisher

0 citations · 2 authors · 2026

Abstract

Medical interviews are a core component of medical education, forming an essential part of the clinical skills that students must acquire. Simulated patients (SPs), who replicate the behavior of real patients, play a crucial role in the learning and examination of medical interviews. However, arranging for competent SPs who consistently perform according to detailed instructions and scenarios remains both labor-intensive and costly. To address this issue, many studies have proposed virtual simulated patients (VSPs) utilizing large language models (LLMs). Conventional VSPs, however, often generate undesirable excessive responses. Moreover, these methods typically rely on cloud-based LLMs, which raises significant concerns about data leakage and operational costs. To overcome these challenges, this study proposes an approach to developing VSPs equipped with mechanisms to suppress excessive responses by utilizing an open-source LLM. The core mechanisms are self-refinement, based on an artificial-intelligence agent approach, and question category-aware answering, which aligns the VSP's responses with the granularity appropriate to the category of a given question. Through experiments with real data, the authors demonstrate that the proposed method significantly reduces excessive responses.
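The self-refinement mechanism described in the abstract can be illustrated with a minimal generate-critique-refine loop. This is a hedged sketch, not the paper's actual method: the prompts, the open-source LLM, and the critic criteria are not available from this metadata page, so `generate`, `critique`, and `refine` are hypothetical stand-ins implemented with simple rules instead of model calls.

```python
# Sketch of a self-refinement loop for suppressing excessive VSP responses.
# Assumption: a critic flags scenario facts that the question did not ask
# about, and the refiner removes them. In the paper these steps would be
# performed by an open-source LLM; here they are rule-based stubs.

def generate(question: str, scenario: dict) -> str:
    """Draft answer: naively reveals every symptom in the scenario
    (mimicking the 'excessive response' failure mode)."""
    return " ".join(scenario["symptoms"])

def critique(question: str, answer: str, scenario: dict) -> list[str]:
    """Flag scenario facts present in the answer but not asked about."""
    asked = {s for s in scenario["symptoms"]
             if s.split()[0] in question.lower()}
    return [s for s in scenario["symptoms"] if s in answer and s not in asked]

def refine(answer: str, issues: list[str]) -> str:
    """Remove each flagged excessive fragment from the answer."""
    for issue in issues:
        answer = answer.replace(issue, "")
    return " ".join(answer.split())

def self_refine(question: str, scenario: dict, max_rounds: int = 3) -> str:
    """Iterate critique/refine until no excessive content remains."""
    answer = generate(question, scenario)
    for _ in range(max_rounds):
        issues = critique(question, answer, scenario)
        if not issues:
            break
        answer = refine(answer, issues)
    return answer

scenario = {"symptoms": ["headache since yesterday.", "nausea after meals."]}
print(self_refine("Do you have a headache?", scenario))
# The unrelated nausea detail is suppressed; only the asked-about
# symptom survives.
```

Question category-aware answering could be layered on top by routing each question to a category-specific response template before this loop runs, but that routing logic is likewise not specified on this page.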

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare