This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis (Preprint)
Citations: 5
Authors: 2
Year: 2023
Abstract
<sec> <title>BACKGROUND</title> ChatGPT (Chat Generative Pre-trained Transformer) has gained popularity for its ability to generate human-like responses. It is essential to note that overreliance or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences. Similarly, lacking trust in the technology can lead to underuse, resulting in missed opportunities. </sec> <sec> <title>OBJECTIVE</title> This study investigated the impact of users’ trust in ChatGPT on their intent to use and actual use of the technology. Four hypotheses were tested: (1) users’ intent to use ChatGPT increases with their trust in the technology; (2) the actual use of ChatGPT increases with users’ intent to use the technology; (3) the actual use of ChatGPT increases with users’ trust in the technology; and (4) users’ intent to use ChatGPT can partially mediate the effect of trust in the technology on its actual use. </sec> <sec> <title>METHODS</title> This study distributed a web-based survey to adults in the United States who actively used ChatGPT (version 3.5) at least once a month, from February 2023 through March 2023. The survey responses were used to develop 2 latent constructs: <i>Trust</i> and <i>Intent to Use</i>, with <i>Actual Use</i> being the outcome variable. The study used partial least squares structural equation modeling to evaluate and test the structural model and hypotheses. </sec> <sec> <title>RESULTS</title> In the study, 607 respondents completed the survey. The primary uses of ChatGPT were information gathering (n=219, 36.1%), entertainment (n=203, 33.4%), and problem-solving (n=135, 22.2%), with a smaller number using it for health-related queries (n=44, 7.2%) and other activities (n=6, 1%). Our model explained 50.5% and 9.8% of the variance in <i>Intent to Use</i> and <i>Actual Use</i>, respectively, with path coefficients of 0.711 and 0.221 for <i>Trust</i> on <i>Intent to Use</i> and <i>Actual Use</i>, respectively.
The bootstrapped results rejected all 4 null hypotheses, with <i>Trust</i> having a significant direct effect on both <i>Intent to Use</i> (β=0.711, 95% CI 0.656-0.764) and <i>Actual Use</i> (β=0.302, 95% CI 0.229-0.374). The indirect effect of <i>Trust</i> on <i>Actual Use</i>, partially mediated by <i>Intent to Use</i>, was also significant (β=0.113, 95% CI 0.001-0.227). </sec> <sec> <title>CONCLUSIONS</title> Our results suggest that trust is critical to users’ adoption of ChatGPT. It remains crucial to highlight that ChatGPT was not initially designed for health care applications. Therefore, an overreliance on it for health-related advice could potentially lead to misinformation and subsequent health risks. Efforts must be focused on improving ChatGPT’s ability to distinguish between queries that it can safely handle and those that should be redirected to human experts (health care professionals). Although risks are associated with excessive trust in artificial intelligence–driven chatbots such as ChatGPT, the potential risks can be reduced by advocating for shared accountability and fostering collaboration between developers, subject matter experts, and human factors researchers. </sec>
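The partial mediation tested in hypothesis 4 (Trust → Intent to Use → Actual Use, alongside the direct Trust → Actual Use path) can be illustrated with a minimal bootstrap sketch. This is not the study's PLS-SEM procedure and uses synthetic data with hypothetical effect sizes; it only shows how direct and indirect effects with percentile confidence intervals are estimated from resampled regressions.

```python
import numpy as np

def bootstrap_mediation(trust, intent, use, n_boot=2000, seed=0):
    """Bootstrap direct and indirect (trust -> intent -> use) effects.

    Per resample, two OLS fits (each with an intercept):
      intent = a * trust                (mediator path a)
      use    = c' * trust + b * intent  (direct path c', mediator path b)
    The indirect effect is a * b. Returns point estimates and 95% CIs.
    """
    rng = np.random.default_rng(seed)
    n = len(trust)
    direct, indirect = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        t, m, y = trust[idx], intent[idx], use[idx]
        # path a: regress mediator on predictor
        a = np.linalg.lstsq(np.column_stack([np.ones(n), t]), m, rcond=None)[0][1]
        # paths c' and b: regress outcome on predictor and mediator
        coef = np.linalg.lstsq(np.column_stack([np.ones(n), t, m]), y, rcond=None)[0]
        direct.append(coef[1])
        indirect.append(a * coef[2])
    ci = lambda x: tuple(np.percentile(x, [2.5, 97.5]))
    return {"direct": (float(np.mean(direct)), ci(direct)),
            "indirect": (float(np.mean(indirect)), ci(indirect))}

# Synthetic illustration (assumed effect sizes, not the study's data):
# trust drives intent, and both drive actual use.
rng = np.random.default_rng(42)
n = 600
trust = rng.normal(size=n)
intent = 0.7 * trust + rng.normal(scale=0.7, size=n)
use = 0.3 * trust + 0.2 * intent + rng.normal(scale=0.9, size=n)
res = bootstrap_mediation(trust, intent, use)
```

An indirect effect whose percentile CI excludes zero, together with a still-significant direct path, is the pattern reported in the abstract as partial mediation.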
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations