OpenAlex · Updated hourly · Last updated: 21.04.2026, 22:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

When does the “assistant” heuristic work? Examining the effect of AI job titles in tasks with varying criticalities on the use of conversational AI-based services

2025 · 1 citation · Computers in Human Behavior Reports · Open Access
Open full text at the publisher

Citations: 1

Authors: 1

Year: 2025

Abstract

Recent marketing trends involve companies using low-status job titles, such as "assistant" (e.g., Google Home Assistant), to label conversational AI agents. This strategy aims to activate an altruistic "assistant" heuristic and enhance users' willingness to use these AI agents. However, this paper—comprising one pretest (N=313), three experiments (N=307, N=300, N=308), and one partial least squares structural equation modeling (PLS-SEM) analysis (N=309)—demonstrates that the effect of this strategy on willingness to use is positive only when the task criticality is high. When the task criticality is not high, higher-hierarchy AI titles (e.g., "manager," "teacher," "analyst") generate greater willingness to use. The research examines three alternative serial mediation pathways—perceived warmth, perceived control, and perceived risks—to test for competing explanations alongside the focal serial mediation through perceived humanlikeness and competence. Across the four studies, the serial mediation via perceived humanlikeness and competence remained robust, even when controlling for alternative pathways and scenario realism (Study 3). The final model indicates that when task criticality is not high, increased perceptions of hierarchical status in conversational AI settings enhance perceived humanlikeness. This, in turn, boosts perceived competence, ultimately increasing users' willingness to use the AI. However, when task criticality is high, the effect reverses—higher-status AI is perceived as less humanlike and less competent, reducing users' willingness to engage with it.


Topics

AI in Service Interactions · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education