This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Believe in Artificial Intelligence? A User Study on the ChatGPT’s Fake Information Impact
44
Citations
5
Authors
2023
Year
Abstract
Technological evolution has enabled the development of new artificial intelligence (AI) models with generative capabilities. Among them, one of the most discussed is the virtual agent ChatGPT. This chatbot may occasionally produce fake information, as its producer, OpenAI, also acknowledges. Such a model may provide very useful support in several tasks, ranging from text summarization to programming. The research community has only marginally investigated the impact that fake information created by AI models has on users' perceptions and on their belief in AI. We analyzed the impact of fake information produced by AI on user perceptions, specifically trust and satisfaction, by performing a user study on ChatGPT. An additional issue is assessing whether early or late knowledge of the tool's possibility of generating fake information has a different impact on users' perceptions. We conducted an experiment involving 62 university students, a category of users who may employ tools such as ChatGPT extensively. The experiment consisted of a guided interaction with ChatGPT. Some of the participants experienced the failure of the chatbot, while a control group received only correct and reliable answers. We collected participants' perceptions of trust, satisfaction, and usability, together with the net promoter score (NPS). The results demonstrated a statistically significant difference in trust and satisfaction between the users who experienced fake information early and those who discovered ChatGPT's faulty behavior later in the interaction. Also, there is no statistically significant difference between the users who received fake information late and the control group (no fake information). Usability and the NPS were also higher when fake information was encountered late in the interaction.
When users are aware of the fake information generated by ChatGPT, their trust and satisfaction decrease, especially when they encounter it at an early stage of using the chatbot. Nevertheless, perceived trust and satisfaction remain high: some users are still enthusiastic, while others favor a more conscious use of the tool as a support whose output must be verified. A useful strategy could be to foster a critical use of ChatGPT, encouraging young people to verify the information it provides. This could be a new way to carry out learning activities.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations