This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
When does the “assistant” heuristic work? Examining the effect of AI job titles in tasks with varying criticalities on the use of conversational AI-based services
Citations: 1
Authors: 1
Year: 2025
Abstract
Recent marketing trends involve companies using low-status job titles, such as "assistant" (e.g., Google Home Assistant), to label conversational AI agents. This strategy aims to activate an altruistic "assistant" heuristic and enhance users' willingness to use these AI agents. However, this paper—comprising one pretest (N=313), three experiments (N=307, N=300, N=308), and one partial least squares structural equation modeling (PLS-SEM) analysis (N=309)—demonstrates that the effect of this strategy on willingness to use is positive only when the task criticality is high. When the task criticality is not high, higher-hierarchy AI titles (e.g., "manager," "teacher," "analyst") generate greater willingness to use. The research examines three alternative serial mediation pathways—perceived warmth, perceived control, and perceived risks—to test for competing explanations alongside the focal serial mediation through perceived humanlikeness and competence. Across the four studies, the serial mediation via perceived humanlikeness and competence remained robust, even when controlling for alternative pathways and scenario realism (Study 3). The final model indicates that when task criticality is not high, increased perceptions of hierarchical status in conversational AI settings enhance perceived humanlikeness. This, in turn, boosts perceived competence, ultimately increasing users' willingness to use the AI. However, when task criticality is high, the effect reverses—higher-status AI is perceived as less humanlike and less competent, reducing users' willingness to engage with it.
Related works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,580 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,422 citations