This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Talking Abortion (Mis)information with ChatGPT on TikTok
Citations: 3
Authors: 5
Year: 2023
Abstract
In this study, we tested users' perception of accuracy and engagement with TikTok videos in which ChatGPT responded to prompts about "at-home" abortion remedies. The chatbot's responses, though somewhat vague and confusing, nonetheless recommended consulting with health professionals before attempting an "at-home" abortion. We used ChatGPT to create two TikTok video variants: one in which users can see ChatGPT explicitly typing back a response, and one in which the text response is presented without any mention of the chatbot. We randomly exposed 100 participants to each variant and found that the group of participants unaware of ChatGPT's text synthesis was more inclined to believe the responses were misinformation. Seemingly under the same impression, TikTok itself attached misinformation warning labels ("Get the facts about abortion") to all videos after we collected our initial results. We then decided to test the videos again with another set of 50 participants and found that the labels did not affect perceptions of abortion misinformation, except in the case where ChatGPT explicitly responded to a prompt for a lyrical output. We also found that more than 60% of the participants expressed negative or hesitant opinions about chatbots as sources of credible health information.
Related works
The spread of true and false news online
2018 · 8,077 citations
What is Twitter, a social network or a news media?
2010 · 6,666 citations
Social Media and Fake News in the 2016 Election
2017 · 6,426 citations
Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception
1983 · 6,263 citations
The Matthew Effect in Science
1968 · 6,165 citations