This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Perceptions of Generative AI among Higher Education Students: Utility, Risks, Cognitive Impact, and Training Needs
Citations: 0
Authors: 1
Year: 2025
Abstract
The rapid emergence of generative Artificial Intelligence (AI) marks an unprecedented turning point in higher education. This technological phenomenon poses a structural challenge to traditional pedagogical models, compelling academic institutions to urgently re-evaluate both their teaching methods and assessment criteria. In this context of disruption, it becomes imperative to evaluate its real impact and to formulate a pedagogical response that goes beyond mere prohibition or unregulated use. The present study is framed within this need; its primary objective is to analyze in depth how higher education students in the Social Sciences perceive AI. The study focused on three axes of perception: the practical utility of AI, the identification of ethical and academic risks inherent in its use, and the explicit demand for training to manage this tool. The methodology followed a quantitative design using a five-dimension Likert-type questionnaire covering the constructs of utility, risk, reliability, cognitive impact, and need for training. The collected data were subjected to inferential analysis using Student's t-test and Pearson's correlation coefficient. The results reveal an adoption driven fundamentally by operational efficiency. The most conclusive finding is the demand for faculty training, which underscores a formative gap. The study emphasizes the urgency of a curricular redefinition that equips both students and faculty to manage risks, overcome skepticism about reliability, and use AI as a critical and responsible instrument.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations