OpenAlex · Updated hourly · Last updated: 15.03.2026, 04:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

“ChatGPT says no”: agency, trust, and blame in Twitter discourses after the launch of ChatGPT

2024 · 10 citations · AI and Ethics · Open Access
Open full text at the publisher

Citations: 10 · Authors: 4 · Year: 2024

Abstract

ChatGPT, a chatbot using the GPT-n series large language model, has surged in popularity by providing conversation, assistance, and entertainment. This has raised questions about its agency and the resulting implications for trust and blame, particularly concerning its portrayal on social media platforms like Twitter. Understanding trust and blame is crucial for gauging public perception of, reliance on, and adoption of AI-driven tools like ChatGPT. To explore ChatGPT's perceived status as an algorithmic social actor and uncover implications for trust and blame through agency and transitivity, we examined 88,058 tweets about ChatGPT, published in a 'hype period' between November 2022 and March 2023, using Corpus Linguistics and Critical Discourse Analysis, underpinned by Social Actor Representation. Notably, ChatGPT was presented in tweets as a social actor on 87% of occasions, using personalisation and agency metaphor to emphasise its role in content creation, information dissemination, and influence. However, a dynamic presentation, oscillating between a creative social actor and an information source, reflected users' uncertainty regarding its capabilities, and blame attribution thus occurred. On 13% of occasions, ChatGPT was presented passively through backgrounding and exclusion. Here, the emphasis on ChatGPT's role in informing and influencing underscores interactors' reliance on it for information, bearing implications for information dissemination and trust in AI-generated content. This study therefore contributes to understanding the perceived social agency of decision-making algorithms and its implications for trust and blame, which is valuable to AI developers and policymakers and relevant to comprehending and dealing with power dynamics in today's age of AI.

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Misinformation and Its Impacts