OpenAlex · Updated hourly · Last updated: 13.03.2026, 14:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Social Misattributions in Conversations with Large Language Models

2025 · 0 citations · 3 authors · Universität Zürich, ZORA · Open Access
Open full text at the publisher

Abstract

We investigate a typology of socially and ethically risky phenomena emerging from the interaction between humans and large language model (LLM)-based conversational systems. Because these phenomena relate to the way humans attribute social identity components, such as social roles, to LLM-based conversational systems, we term them 'social misattributions.' Drawing on foundational works in interactional sociolinguistics, interpersonal pragmatics, and recent debates in the philosophy of technology, we argue that these social misattributions represent higher-order forms of anthropomorphisation of LLM-based conversational systems that are not justified by their technical capabilities but rather follow from the social context of conversational interactions. We discuss the risks these misattributions pose to human users, including emotional manipulation and unwarranted trust, and propose mitigation strategies. Our recommendations emphasise the importance of fostering social transparency and exploring approaches, such as frictional design, that are currently promoted in the research domain of human-centred artificial intelligence.

Topics

AI in Service Interactions · Neurobiology of Language and Bilingualism · Artificial Intelligence in Healthcare and Education