This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The artificial intelligence disclosure penalty: Humans persistently devalue AI-generated creative writing.
Citations: 1 · Authors: 3 · Year: 2026
Abstract
Although preliminary evidence suggests that humans often react adversely to artificial intelligence (AI)-generated creative works, we have little understanding of how robust or persistent these reactions may be. In a series of 16 preregistered experiments (N = 27,491), we examine how evaluations of creative writing are affected by whether participants believe the content was produced with an AI model. We find consistent evidence of an AI disclosure penalty: participants' evaluations of creative writing decrease when they believe writing samples were written by an AI model, or with the help of one, rather than by a human author alone, and this effect is mediated by perceived authenticity. The AI disclosure penalty is sticky, persisting across evaluation metrics, contexts, kinds of written content, and multiple interventions derived from prior research aimed at moderating the effect. Our results indicate that AI disclosure penalties for creative writing may be stubbornly difficult to mitigate, at least at this time. (PsycInfo Database Record (c) 2026 APA, all rights reserved.)