This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Do Artificial Intelligence Clients Speak Like Human Clients? Exploring GPT‐4's Content‐Level Performance in Counseling Role‐Play
Citations: 1
Authors: 6
Year: 2025
Abstract
Traditional role-playing methods in counselor education, such as peer counseling and actor-based simulations, enhance trainee competencies but have also created ethical dilemmas. These methods are not cost-effective and may not capture and address client diversity. With the advancement of artificial intelligence (AI), new opportunities have emerged for enhancing training through AI-driven simulations. This study explores the use of GPT-4, a large language model (LLM) with voice capabilities, for client simulation in counseling role-play practice. We created 12 fictional client scenarios, engaged in simulation sessions with the fictional clients via GPT-4, and then assessed its content-level performance using qualitative content analysis. Findings indicated that GPT-4 showed high-level performance in generating client narratives that include realistic descriptions of presenting concerns, emotional experiences, and culturally relevant details. These results suggest that GPT-4 may offer a cost-effective, accessible, and useful training tool for counselors to independently practice and enhance their skills.
Related Works
Making sense of Cronbach's alpha
2011 · 13,683 citations
Technology-Enhanced Simulation for Health Professions Education
2011 · 1,929 citations
The future vision of simulation in health care
2004 · 1,848 citations
Does Simulation-Based Medical Education With Deliberate Practice Yield Better Results Than Traditional Clinical Education? A Meta-Analytic Comparative Review of the Evidence
2011 · 1,704 citations
A critical review of simulation‐based medical education research: 2003–2009
2009 · 1,648 citations