This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Visible sources and invisible risks: exploring the impact of AI disclosure on perceived credibility of AI-generated content
Citations: 0 · Authors: 2 · Year: 2026
Abstract
With the widespread use of AI-generated content (AIGC) on social media, its potential to spread misinformation poses threats to the public. Although AI disclosure is widely promoted as a transparency measure to prompt critical evaluation, its effectiveness in science communication remains controversial. This study conducted a within-subjects experiment (N = 433) to examine how AI disclosure affects perceived credibility of science communication texts and the moderating roles of readers' negative attitudes towards AI and audience involvement. The experiment manipulated AI disclosure labels and information veracity. The results revealed a truth-falsity crossover effect: AI disclosure significantly reduced the perceived credibility of correct information while unexpectedly increasing the perceived credibility of misinformation. Negative attitudes towards AI significantly moderated these effects, whereas audience involvement exerted only limited influence. These findings highlight the complex and sometimes counterproductive consequences of AI disclosure in science communication and suggest implications for cue-based processing, algorithm aversion, and the design of disclosure mechanisms.
Related works
The spread of true and false news online
2018 · 8,148 citations
What is Twitter, a social network or a news media?
2010 · 6,668 citations
Social Media and Fake News in the 2016 Election
2017 · 6,457 citations
Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception
1983 · 6,273 citations
The Matthew Effect in Science
1968 · 6,192 citations