This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Determinants of ChatGPT Adoption in Academe & Other Fields – A Review on Theoretical Perspective
Citations: 2
Authors: 2
Year: 2024
Abstract
ChatGPT has shown promising advantages, including its capability to optimize work and converse like a human being. In the academe, ChatGPT has been seen as capable of answering formative assessments, aiding in research, and acting as a virtual tutor. However, ChatGPT has also been criticized for misleading and inaccurate responses. This has led the scientific community to further study its adoption factors. This review discussed and analyzed 53 empirical studies that aimed to determine the factors influencing ChatGPT adoption and use in the academe and other fields. Performance expectancy, personal innovativeness, trust, attitude, and self-efficacy were identified as common determinants of ChatGPT adoption across various fields. In addition, experience and the presence of a Generative AI policy also determine ChatGPT adoption. The Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT/UTAUT2) are the most widely used frameworks found in this review. Practically, this review recommends that ChatGPT adoption and use be further studied in the educational sector, focusing on the contrasting results for the significant factors found. Policies on how academic institutions will adopt and use ChatGPT are also highly recommended. With respect to other areas, studies on ChatGPT adoption and use in other economic sectors (healthcare, business, law, software development, dentistry, etc.) are recommended. Theoretically, this review recommends the use of TAM and UTAUT/UTAUT2 in future studies of ChatGPT adoption, considering personal innovativeness, trust, and self-efficacy as extension constructs and experience and policy as moderating constructs.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations