This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Talking Abortion (Mis)information with ChatGPT on TikTok
8 Citations · 5 Authors · Year: 2023
Abstract
In this study, we tested users’ perception of accuracy and engagement with TikTok videos in which ChatGPT responded to prompts about “at-home” abortion remedies. The chatbot’s responses, though somewhat vague and confusing, nonetheless recommended consulting with health professionals before attempting an “at-home” abortion. We used ChatGPT to create two TikTok video variants: one in which users can see ChatGPT explicitly typing back a response, and one in which the text response is presented without any reference to the chatbot. We randomly exposed 100 participants to each variant and found that the group of participants unaware of ChatGPT’s text synthesis was more inclined to believe the responses were misinformation. Under the same impression, TikTok itself attached misinformation warning labels (“Get the facts about abortion”) to all videos after we collected our initial results. We then decided to test the videos again with another set of 50 participants and found that the labels did affect the perceptions of abortion misinformation. We also found that more than 60% of the participants expressed negative or hesitant opinions about chatbots as sources of credible health information.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations