This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Mitigating Automation Bias in Generative AI Through Nudges: A Cognitive Reflection Test Study
Citations: 0
Authors: 3
Year: 2025
Abstract
Generative Artificial Intelligence (AI), typified by large language models (LLMs), can significantly augment human decision-making across diverse domains. However, it also introduces a potential pitfall: automation bias, whereby users over-rely on AI-generated outputs, failing to sufficiently question or verify them. This paper presents a quantitative experiment employing the Cognitive Reflection Test (CRT) to measure whether users critically reflect on AI-generated responses. In an online study with three conditions (no support, faulty AI support, and faulty AI support plus a warning nudge), we assess both the existence of automation bias and whether nudging can help mitigate this effect. Results indicate that participants who received faulty AI support performed significantly worse on CRT questions, answering fewer than half as many CRT items correctly compared to the control group without any support. This effect shows that users often uncritically accept AI outputs. Yet, embedding a warning nudge into the user interface alleviated the effect and almost doubled user performance compared to the purely faulty AI support condition. However, the nudge did not elevate performance above the no-support control group. Additional analyses found that user “AI literacy” (i.e., user-reported knowledge and experience with AI) did not significantly prevent automation bias. Overall, our findings stress the importance of designing AI-based systems more responsibly to reduce over-reliance on AI outputs. They further suggest that simple interface nudges can strengthen users’ critical reflection in collaboration with generative AI systems.
Similar Works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,549 citations
A Framework for Representing Knowledge
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,306 citations