This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Standardized Assessment of Artificial Intelligence Literacy: Development and Validation of the Multidimensional AI Literacy Competency Scale (MAIL-CS)
Citations: 0
Authors: 1
Year: 2025
Abstract
Generative AI’s rapid diffusion demands precise, up-to-date measures of AI literacy. This study develops and validates the Multidimensional AI Literacy Competency Scale (MAIL-CS), designed specifically for the GenAI era. Using a large sample of Chinese university students (N = 850) and a split-sample design, we conducted EFA and CFA to establish a robust four-factor structure—Foundational Knowledge & Ethics, Operational Skills, Critical Evaluation, and Application & Innovation. The best-fitting model showed strong fit indices, and the 32-item scale demonstrated high internal consistency (Cronbach’s α and McDonald’s ω ≥ .82 for all subscales; α = .91, ω = .92 for the total scale). Convergent validity was supported by positive correlations with digital literacy and critical thinking; discriminant validity was evidenced by negligible relations with Big Five traits. MAIL-CS offers educators, researchers, and policymakers a reliable instrument for diagnosing competency gaps, evaluating interventions, and informing curriculum and strategy. Validation in a non-Western context provides a foundation for cross-cultural assessment and future invariance testing.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations