This is an overview page with metadata for this scientific work. An external link to the full text is currently not available.
What Do People See in Large Language Models’ Social Behavior? Exploring Individuals’ Reactions to LLM-LLM Interactions and Their Impact on Technology Perceptions
Citations: 0
Authors: 3
Year: 2026
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text. As this technology becomes more widespread, LLMs will likely interact not only with humans but also with other LLMs. However, while prior research has shown that robot-robot communication can influence how people perceive the artificial agents, studies on whether observing LLM-LLM interactions can affect users’ perceptions are still lacking. This study examines how individuals perceive communication between LLMs (ChatGPT-3.5 vs. ChatGPT-4.0) through seventeen in-depth interviews. The findings reveal that when participants see LLM interactions as cohesive, they may perceive these interactions as a form of human-like collaboration. Moreover, this perception may lead participants to anthropomorphize the LLM further, attributing to it human-like qualities, such as proactivity and emotions, as underlying causes of the observed collaborative behavior. Ultimately, perceiving the model as a “being” endowed with empathy improves the participants’ attitudes toward the technology.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,496 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,386 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,848 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,562 citations