This is an overview page with metadata for this scientific article. The full article is available from the publisher.
How do people react to ChatGPT's unpredictable behavior? Anthropomorphism, uncanniness, and fear of AI: A qualitative study on individuals’ perceptions and understandings of LLMs’ nonsensical hallucinations
24 Citations · 3 Authors · 2025
Abstract
• We conducted a qualitative study on how people perceive LLMs’ unpredictable behaviors
• We interviewed 20 participants to gather their feedback on a hallucination dialogue
• We found that unpredictable behaviors change how people experience ChatGPT
• We show that these behaviors evoke unsettling emotions and fear of AI

Large Language Models (LLMs) have shown impressive capabilities in producing texts whose quality and fluency resemble those of human-written texts. Despite their increasing use, however, how the broader population experiences many aspects of interaction with LLMs remains underexplored. This study investigates how diverse individuals perceive and account for “nonsensical hallucinations”, namely an LLM’s unpredictable and meaningless behavior produced in response to a user’s request. We asked 20 participants to interact with ChatGPT 3.5 and experience its hallucinations. Through semi-structured interviews, we found that participants with a computer science background or consistent prior use of LLMs interpret unpredictable nonsensical responses as errors, while novices perceive them as the model’s autonomous behaviors. Moreover, we discovered that such responses produce an abrupt shift in participants’ perceptions and understandings of the LLM’s nature. From a soothing and polite entity, ChatGPT becomes either an obscure and unfamiliar “alien” or a human-like being potentially hostile to humankind, also eliciting unsettling feelings that may reveal an underlying fear of Artificial Intelligence. The study contributes to the literature on how people react to the unfamiliarity of a technology that may be perceived as alien and yet extremely human-like, generating “uncanny effects,” as well as to research on the anthropomorphizing of technology.
Related work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations