This is an overview page with metadata for this scientific work. The full article is available from the publisher.
"That's another doom I haven't thought about": A User Study on AI Labels as a Safeguard Against Image-Based Misinformation
Citations: 0
Authors: 9
Year: 2025
Abstract
As generative AI is increasingly contributing to the spread of deceptively realistic misinformation, lawmakers have introduced regulations requiring the disclosure of AI-generated content. However, it is unclear if labels reduce the risk of users falling for AI-generated misinformation. To address this research gap, we study the effect of labels on users' perception and the implications of mislabeling, focusing on AI-generated images. We first explored users' opinions and expectations of labels using five focus groups. Although participants were wary of practical implementations, they considered labeling helpful in identifying AI-generated images and avoiding deception. Second, we conducted a survey with 1354 participants to assess how labels affect users' ability to recognize misinformation. While labels reduced participants' belief in false claims supported by AI-generated images, we found evidence of overreliance, leading to unintended side effects: Participants were more susceptible to false claims accompanied by human-made images, and were more hesitant to believe true claims illustrated with labeled AI-generated images.
Related Works
The spread of true and false news online
2018 · 7,968 citations
What is Twitter, a social network or a news media?
2010 · 6,630 citations
Social Media and Fake News in the 2016 Election
2017 · 6,385 citations
Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception
1983 · 6,250 citations
The Matthew Effect in Science
1968 · 6,120 citations