OpenAlex · Updated hourly · Last updated: 22.04.2026, 02:03

This is an overview page with metadata for this scholarly work. An external link to the full text is not currently available.

What Do People See in Large Language Models’ Social Behavior? Exploring Individuals’ Reactions to LLM-LLM Interactions and Their Impact on Technology Perceptions

2026 · 0 citations · Open Mind · Open Access

Citations: 0 · Authors: 3 · Year: 2026

Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text. As this technology becomes more widespread, LLMs will likely interact not only with humans but also with other LLMs. However, while prior research has shown that robot-robot communication can influence how people perceive the artificial agents, studies on whether observing LLM-LLM interactions can affect users’ perceptions are still lacking. This study examines how individuals perceive communication between LLMs (ChatGPT-3.5 vs. ChatGPT-4.0) through seventeen in-depth interviews. The findings reveal that when participants see LLM interactions as cohesive, they may perceive these interactions as a form of human-like collaboration. Moreover, this perception may lead participants to anthropomorphize the LLM further, attributing to it human-like qualities, such as proactivity and emotions, as underlying causes of the observed collaborative behavior. Ultimately, perceiving the model as a “being” endowed with empathy improves the participants’ attitudes toward the technology.

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Computational and Text Analysis Methods