This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Original or fake? Value attributed to text-based archives generated by artificial intelligence.
4
Citations
3
Authors
2022
Year
Abstract
Openly available natural language generation (NLG) algorithms can generate human-like texts across multiple domains. Given the increasing potential of NLG algorithms, many ethical challenges arise, such as their use as a tool for misinformation. It is necessary to understand not just how these texts are generated from an algorithmic point of view, but also how they are evaluated by a general audience. In the current study, our aim was to shed light on how people react to texts generated algorithmically, whether such texts are indistinguishable from original (human-generated) texts, and the value people assign to these texts in a rapidly automating world. Using original text-based archives and fake text-based archives generated by artificial intelligence (AI), findings from our pre-registered, statistically powered study (N=228) revealed that people assigned lower value to AI-generated archives than to original archives. Although participants were unable to accurately distinguish between AI-generated and original archives, original archives were more likely to be preserved than AI-generated ones. This bias against AI archives persisted whether or not people were aware of an archive's source, as well as when they merely categorised an archive as AI-generated (even if it was not). People's judgements of value were also influenced by their attitudes toward AI. These findings provide a richer understanding of how the emergent practice of automated text content creation alters the practices of readers and writers alike, and have implications for how readers' attitudes toward AI affect the use and value of AI-based applications and creations.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations