OpenAlex · Updated hourly · Last updated: 19.03.2026, 23:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The paradox of agency in psychotherapy: How people with mental distress experience support from generative AI chatbots and human therapists

2025 · 0 citations · BMC Psychiatry · Open Access
Open full text at the publisher

0 citations · 5 authors · published 2025

Abstract

The rapid advancement of Large Language Models has sparked heated debate over whether generative artificial intelligence (AI) chatbots can serve as "digital therapists" capable of providing therapeutic support. While much of this discussion focuses on AI's lack of agency, understood as the absence of mental states, consciousness, autonomy, and intentionality, empirical research on users' real-world experiences remains limited. This study explores how individuals with mental distress experience support from both generative AI chatbots and human psychotherapy in natural, unguided contexts, focusing on how perceptions of agency shape therapeutic experiences. By drawing on participants' dual exposure, the study seeks to contribute to the ongoing debate about "AI therapists" by clarifying the role of agency in therapeutic change.

Sixteen adults who had sought mental health support from both human therapists and ChatGPT participated in semi-structured interviews, during which they shared and compared their experiences with each type of interaction. Transcripts were analyzed using reflexive thematic analysis.

Three themes captured participants' perceptions of ChatGPT relative to human therapists: (1) encouraging open and authentic self-disclosure while limiting deep exploration; (2) the myth of relationship: caring, acceptance, and understanding; and (3) fostering therapeutic change: the promise and pitfalls of data-driven solutions. We propose a conceptual model illustrating how differences in agency status between AI chatbots and human therapists shape the distinct ways each supports individuals with mental distress, with agency functioning as both a strength and a limitation for therapeutic engagement.

Given that agency functions as a double-edged sword in therapeutic interactions, future mental health services should consider integrated care models that combine the non-agential advantages of AI chatbots with the agentic qualities of human therapists. Rather than anthropomorphizing AI chatbots, their non-agential features, such as responsiveness, absence of intentions, objectivity, and disembodiment, should be strategically leveraged to complement specific functions in human-delivered psychotherapy. At the same time, practitioners should maximize the benefits of their own agentic qualities while remaining cautious of the associated risks. The findings should be interpreted with caution, as the sample consisted mainly of young, well-educated Chinese participants from a collectivist cultural context, which may limit transferability to other populations, particularly those from individualistic cultures with different levels of mental health literacy, stigma patterns, and therapeutic norms.


Topics

Digital Mental Health Interventions · Mental Health via Writing · Artificial Intelligence in Healthcare and Education