OpenAlex · Updated hourly · Last updated: 30.03.2026, 04:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Understanding successful human–AI teaming: The role of goal alignment and AI autonomy for social perception of LLM-based chatbots

2025 · 0 citations · Computers in Human Behavior: Artificial Humans · Open Access
Open full text at the publisher

Citations: 0 · Authors: 8 · Year: 2025

Abstract

LLM-based chatbots such as ChatGPT support collaborative, complex tasks by leveraging natural language processing to provide skills, knowledge, or resources beyond the user's immediate capabilities. Joint activity theory suggests, however, that effective human–AI collaboration requires more than responding to verbatim prompts: it depends on aligning with the user's underlying goal. Since prompts may not always state the goal explicitly, an effective LLM should analyze the input to approximate the intended objective before autonomously tailoring its response to align with the user's goal. To test these assumptions, we examined the effects of LLM-based chatbots' autonomy and goal alignment on multiple social perception metrics as key criteria for successful human–AI teaming (i.e., perceived cooperation, warmth, competence, traceability, usefulness, and trustworthiness). We conducted a scenario-based online experiment in which participants (N = 182, within-subjects design) were instructed to collaborate with four different versions of an LLM-based chatbot. The overall goal of the study scenario was to detect and correct erroneous information in short encyclopedic articles, representing a prototypical knowledge-work task. Four custom-instructed chatbots were presented in random order: three chatbots varying in goal alignment and AI autonomy, and one chatbot serving as a control condition that did not fulfill user prompts. Repeated-measures ANOVAs demonstrate that a chatbot that excels in goal alignment by autonomously going beyond verbatim user prompts is perceived as superior both to a chatbot that adheres rigidly to user prompts without adapting to implicit objectives and to chatbots that fail to meet the explicit or implicit user goal.
These results support the notion that AI autonomy is only perceived as beneficial as long as user goals are not undermined by the chatbot, emphasizing the importance of balancing user and AI autonomy in the human-centered design of AI systems.

Highlights:
• 182 online participants evaluated LLM-based chatbot versions varying in AI autonomy and goal alignment.
• High AI autonomy combined with goal alignment led to a superior user experience.
• Goal alignment was more decisive for warmth and teaming perception than AI autonomy.
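The repeated-measures ANOVA named above can be sketched in plain Python. The function below computes a one-way repeated-measures F-test for a subjects × conditions layout; the example ratings are made up for illustration and are not data from the study:

```python
def rm_anova(scores):
    """One-way repeated-measures ANOVA.

    scores: one list per subject, each containing one rating per
    within-subjects condition (e.g., one per chatbot version).
    Returns (F, df_conditions, df_error).
    """
    n = len(scores)           # number of subjects
    k = len(scores[0])        # number of conditions
    grand = sum(sum(row) for row in scores) / (n * k)
    cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
    subj_means = [sum(row) / k for row in scores]
    # Partition total variability into condition, subject, and error parts.
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

# Hypothetical ratings: 3 subjects, 2 chatbot conditions.
f, df1, df2 = rm_anova([[1, 3], [2, 5], [3, 4]])
print(f, df1, df2)  # → 12.0 1 2
```

Because subjects rate every condition, between-subject variability (`ss_subj`) is removed from the error term, which is what makes the within-subjects design of the study more sensitive than a between-groups comparison.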

Topics

AI in Service Interactions · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI