OpenAlex · Updated hourly · Last updated: 11.05.2026, 04:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Relational Amplifier: How Anthropomorphism of Generative AI Backfires for Distressed Users

2026 · 0 citations · PsyArXiv (OSF Preprints) · Open Access
Open full text at the publisher

0 citations · 6 authors · 2026

Abstract

General-purpose Generative Artificial Intelligence (GenAI) is increasingly utilized as an unregulated source of mental health support, yet the psychological dynamics of users' interactions with these agents, and the associated risks, remain underexplored. Integrating social-cognitive models of anthropomorphism with attachment theory via the proposed Relational Amplifier framework, this study addressed two primary objectives: (1) to identify the unique predictors driving GenAI adoption for mental health support, and (2) to examine how psychological distress moderates the impact of anthropomorphism on Projected AI-anxiety. A representative sample of 584 adults (using a pre-registered data collection protocol) completed the DASS-21 (assessing psychological distress), the AIPAS (measuring anthropomorphism), and a Projected AI-Attachment scale adapted from the ECR-RS. Psychometric analysis (CFA) confirmed the validity of this new construct and its distinction from general attachment orientations. Logistic regression confirmed a three-way interaction hypothesis: the likelihood of utilizing GenAI for support was not driven by distress alone, but by a specific constellation of high distress, high anthropomorphism, and elevated Projected AI-anxiety. Moderation analysis revealed a critical backfire effect: while anthropomorphism reduced Projected AI-anxiety in low-distress users, it amplified it in highly distressed users. The results indicate that for vulnerable individuals, humanizing the agent does not provide genuine emotional security but rather serves as a screen for projecting internal insecurities. Ultimately, these findings suggest that anthropomorphic design of AI agents may increase engagement but also exacerbate anxiety for vulnerable users, challenging the assumption that maximizing human-likeness necessarily benefits mental health support.
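The abstract's central analysis is a logistic regression in which GenAI use for support is predicted by the three-way interaction of distress, anthropomorphism, and Projected AI-anxiety. A minimal sketch of such a model is shown below on synthetic data; the variable names (`distress`, `anthro`, `ai_anxiety`, `uses_genai`) and effect sizes are illustrative assumptions, not the authors' actual specification.

```python
# Hedged sketch: a three-way interaction logistic regression of the kind
# described in the abstract, fit on synthetic data with statsmodels.
# All variable names and coefficients here are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 584  # sample size reported in the abstract
df = pd.DataFrame({
    "distress": rng.normal(size=n),    # e.g. standardized DASS-21 score
    "anthro": rng.normal(size=n),      # e.g. standardized AIPAS score
    "ai_anxiety": rng.normal(size=n),  # projected AI-attachment anxiety
})

# Simulate an outcome that carries a built-in three-way interaction
logit = (-0.5 + 0.3 * df.distress + 0.2 * df.anthro + 0.2 * df.ai_anxiety
         + 0.6 * df.distress * df.anthro * df.ai_anxiety)
df["uses_genai"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# '*' in the formula expands to all main effects plus every two-way
# interaction and the three-way term distress:anthro:ai_anxiety
model = smf.logit("uses_genai ~ distress * anthro * ai_anxiety",
                  data=df).fit(disp=False)
print(model.params["distress:anthro:ai_anxiety"])
```

The sign and significance of the `distress:anthro:ai_anxiety` coefficient is what a moderation analysis of this shape would probe; probing the "backfire" pattern would then mean computing simple slopes of anthropomorphism at low versus high distress.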

Topics

Digital Mental Health Interventions · Artificial Intelligence in Healthcare and Education · Mental Health via Writing