This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Too Easy to Resist? How Perceived Ease, Usefulness, and Ethics Drive ChatGPT Adoption in Higher Education
Citations: 0
Authors: 3
Year: 2025
Abstract
Academic integrity is a cornerstone of higher education, built on values such as honesty, fairness, respect, responsibility, and courage (Fishman, 2014). In today's academic setting, questions of authorship and originality have become more complicated due to the growing role of AI in content creation. Students may be tempted to use AI tools to complete essays or assignments, raising concerns about the authenticity of their work. The Technology Acceptance Model (TAM), introduced by Davis (1989), is a widely used framework for examining how users interact with new technologies. TAM is based on the Theory of Reasoned Action (TRA) (Ajzen, 1991). Its primary goal is to understand the factors that influence technology acceptance and to provide a theoretical basis for successful technology implementation. Practically, TAM aims to predict user behavior and propose measures for technology adoption before it is introduced (Marikyan & Papagiannidis, 2023). According to TAM, two key factors determine whether a new technology will be accepted: perceived usefulness and perceived ease of use. Perceived usefulness refers to the belief that using a particular system will enhance job performance, while perceived ease of use refers to the belief that using the system will require minimal effort (Davis, 1989). TAM has been validated in numerous studies in the educational context (Abdullah & Ward, 2016; Dahri et al., 2024; Granić & Marangunić, 2019; Obenza et al., 2024; Rahman et al., 2023; Shaengchart, 2023; Sherer et al., 2019). While previous research has shown that perceived usefulness and perceived ease of use are critical in shaping students' attitudes toward using AI tools like ChatGPT for learning, there is still a gap in research exploring how students perceive ChatGPT as a potential tool for academic dishonesty.

JEL Codes: I23, O33, D83

Keywords: Academic dishonesty, AI in education, ChatGPT, Perceived Risk and Benefit Theory, TAM.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations