This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Error Imaginations: How German Lay Users Negotiate Risks of Generative AI Errors in Political Information Searches
Citations: 0
Authors: 1
Year: 2026
Abstract
Generative AI (GenAI) is increasingly used as an information source across various domains, including politics. However, GenAI systems frequently produce errors, which creates particular challenges in sensitive contexts such as political information. Existing research on GenAI errors primarily focuses on model safety evaluations or the broader societal risks of AI-generated misinformation. Yet little attention has been paid to how lay users interpret and respond to GenAI errors in political contexts. This study argues that lay users imagine how and why errors occur, making GenAI errors negotiated, speculated-on, and mythologized phenomena. Building on the concept of $2, I call these ongoing negotiations error imaginations, conceptualized as means through which lay users subjectively anticipate risks and manage AI's fundamental opacity. This qualitative pilot study comprised two focus groups (N=24) conducted in September 2025 with German part-time university students from interdisciplinary backgrounds. Using a scenario-based vignette design, participants engaged with deliberately erroneous mock GenAI responses to political queries classified by the error taxonomy: factual errors (nonsensical) and evasion (refusal). Preliminary Reflexive Thematic Analysis revealed a striking paradox: most participants had no direct experience using GenAI for political information, yet articulated vivid error imaginations resulting in the risk-mitigation practice of anticipatory non-use. While participants expressed concerns about manipulation and superhuman persuasive power, they simultaneously reproduced industry framings that naturalize GenAI errors as technical limitations. Paradoxically, anticipatory non-use creates a self-reinforcing cycle in which error imaginations remain uncorrected by experience, leaving users dependent on external narratives rather than developing situated algorithmic epistemic vigilance.
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,650 cit.
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,878 cit.
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,479 cit.
Fairness through awareness
2012 · 3,296 cit.
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 cit.