This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Student Perceptions of AI in Learning: The Role of Credibility and Emotional Well-Being in Supporting Critical Thinking Skills
Citations: 0
Authors: 5
Year: 2025
Abstract
The growing use of artificial intelligence (AI) tools (e.g., ChatGPT, Grammarly) in higher education is often claimed to enhance students’ critical thinking, yet perceived benefits remain inconsistent and may depend more on trust and affective experience than on technical features alone. This study examined students’ perceptions of AI as a support for critical thinking by testing five predictors—perceived AI credibility, AI quality, cognitive absorption, emotional well-being, and satisfaction—and their effects on overall AI perception. A quantitative cross-sectional survey was administered to 90 Indonesian university students (purposive sampling; ages 18–25) using 26 closed-ended Likert items (5-point scale) and three open-ended questions; data were analyzed in Jamovi using descriptive statistics, Pearson correlations, and multiple linear regression. The results indicated generally moderate perceptions of AI (item means ≈ 2.2–2.8), significant positive correlations among all variables (p < .001), and strong explanatory power of the regression model (R² = 0.737; adjusted R² = 0.720). In the multivariate model, emotional well-being (standardized β = 0.267, p = 0.016) and AI credibility (standardized β = 0.196, p = 0.043) were the only significant predictors, whereas AI quality, cognitive absorption, and satisfaction showed positive but non-significant effects. These findings imply that AI-supported learning interventions should prioritize credible, trustworthy AI outputs and pedagogical designs that promote positive emotional experiences (e.g., comfort, reduced stress, motivation) to strengthen perceived critical-thinking benefits; overall, affective and trust-related factors appear to be central drivers of students’ positive AI perceptions, warranting validation in larger and longitudinal studies.
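The analysis described in the abstract (standardized betas from a multiple linear regression, plus R²) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual dataset or Jamovi output; the variable roles and effect sizes are assumptions chosen only to mirror the reported pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 90  # sample size matching the study

# Hypothetical predictor scores (the real survey data are not public):
# columns = credibility, quality, absorption, well-being, satisfaction
X = rng.normal(size=(n, 5))

# Synthetic outcome "overall AI perception", driven mainly by
# well-being (col 3) and credibility (col 0), plus noise
y = 0.30 * X[:, 3] + 0.25 * X[:, 0] + 0.10 * X[:, 1] + rng.normal(scale=0.5, size=n)

# z-score predictors and outcome, then fit ordinary least squares;
# coefficients on z-scored variables are the standardized betas
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)

# R^2 of the standardized model (yz has zero mean, unit variance)
r2 = 1 - np.sum((yz - Xz @ beta) ** 2) / np.sum(yz ** 2)
```

In practice a statistics package (Jamovi, as in the study, or `statsmodels` in Python) would also report p-values for each coefficient; the sketch above only recovers the point estimates and model fit.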
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,445 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,325 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,761 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,530 citations