This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Generative AI literacy, mindful learning engagement, and academic integrity: An explanatory sequential mixed-methods study on critical thinking development
Citations: 0 · Authors: 1 · Year: 2026
Abstract
The rapid integration of generative artificial intelligence (AI) in higher education has created both transformative learning opportunities and serious concerns regarding academic integrity and the erosion of critical thinking; however, empirical evidence explaining how AI literacy shapes ethical engagement and higher-order thinking remains limited, particularly within developing country contexts. This study aims to examine the influence of generative AI literacy on students’ critical thinking skills, with academic integrity and mindful learning engagement positioned as mediating variables. Employing an explanatory sequential mixed-methods design, the research involved 115 undergraduate students who completed a 32-item survey analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM), followed by in-depth interviews with 10 purposively selected participants to enrich interpretation. The findings reveal that generative AI literacy significantly predicts academic integrity, mindful learning engagement, and critical thinking. Academic integrity serves as a strong positive mediator, reinforcing ethical reasoning and evaluative judgment, while mindful learning engagement demonstrates a more complex role, indicating that engagement quality, rather than intensity alone, determines its contribution to critical thinking. The study reconceptualizes AI literacy as a multidimensional construct encompassing technical, ethical, and reflective competencies. Practically, it highlights the necessity of integrating ethically grounded and mindfulness-based AI pedagogies to ensure that generative AI enhances, rather than replaces, students’ critical thinking in digitally mediated higher education environments.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations