This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Generative Artificial Intelligence Acceptance Scale: A Validity and Reliability Study
Citations: 164
Authors: 3
Year: 2023
Abstract
The purpose of this study is to develop an acceptance scale grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT) model, designed to assess students' acceptance of generative artificial intelligence (AI) applications. The scale development study was conducted in three phases, encompassing 627 university students from various faculties who had used generative AI tools such as ChatGPT during the 2022–2023 academic year. To evaluate the face and content validity of the scale, input was sought from professionals with expertise in the field. The initial sample group (n = 338) underwent exploratory factor analysis (EFA) to explore the underlying factors, while the subsequent sample group (n = 250) underwent confirmatory factor analysis (CFA) to verify the factor structure. The EFA revealed four factors comprising 20 items that accounted for 78.349% of the total variance. The CFA results confirmed that the structure of the scale, featuring 20 items and four factors (performance expectancy, effort expectancy, facilitating conditions, and social influence), was compatible with the obtained data. Reliability analysis yielded a Cronbach's alpha coefficient of 0.97, and the test–retest method demonstrated a reliability coefficient of 0.95. To evaluate the discriminative power of the items, a comparative analysis was conducted between the lower 27% and upper 27% of participants, with subsequent calculation of corrected item-total correlations. The results demonstrate that the generative AI acceptance scale exhibits strong validity and reliability, affirming its effectiveness as a robust measurement instrument.
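The reliability and item-discrimination statistics mentioned in the abstract (Cronbach's alpha, corrected item-total correlations, and the lower/upper 27% extreme-groups comparison) can be illustrated with a short sketch. This is not the authors' analysis code; the simulated Likert data and all function names below are assumptions chosen for demonstration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Simulated 5-point Likert responses for 338 respondents and 20 items:
# one common latent trait plus item-level noise (hypothetical data).
rng = np.random.default_rng(0)
n, k = 338, 20
trait = rng.normal(size=(n, 1))
raw = trait + rng.normal(scale=0.8, size=(n, k))
scores = np.clip(np.round(2 * raw + 3), 1, 5)

alpha = cronbach_alpha(scores)
r_it = corrected_item_total(scores)

# Extreme-groups discrimination: compare item means of the lower 27%
# and upper 27% of respondents ranked by total score.
totals = scores.sum(axis=1)
order = np.argsort(totals)
cut = int(0.27 * n)
lower, upper = scores[order[:cut]], scores[order[-cut:]]
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

print(f"alpha = {alpha:.2f}")
print(f"min corrected item-total r = {r_it.min():.2f}")
print(f"min 27% group discrimination = {discrimination.min():.2f}")
```

With simulated data driven by a single strong latent trait, alpha and the item-total correlations come out high; in a real study, items with low corrected item-total correlation or a non-significant upper/lower group difference would be candidates for removal.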
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations