This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The SAGE framework for developing critical thinking and responsible generative AI use in cybersecurity education
Citations: 0
Authors: 2
Year: 2025
Abstract
The rapid advancement of Generative Artificial Intelligence (GenAI) has introduced new opportunities for transforming higher education, particularly in fields requiring critical analysis and regulatory compliance, such as cybersecurity management. This study introduces the Structured AI Guided Education (SAGE) framework, which integrates generative AI responsibly to cultivate critical thinking in cybersecurity education and offers systematic, ready-to-adopt implementation blueprints. The implementation strategy followed a two-stage approach, embedding GenAI within tutorial exercises and assessment tasks. Tutorials enabled students to generate, critique, and refine AI-assisted cybersecurity policies, whilst assessments required them to apply AI-generated outputs within real-world industry scenarios, ensuring alignment with academic standards and regulatory requirements. The research provides practical blueprints for curriculum design, tutorial structure, and assessment methodologies that enable educators to leverage GenAI whilst maintaining academic rigour and developing critical thinking competencies. Findings indicate that AI-assisted learning significantly enhanced students' ability to evaluate security policies, refine risk assessments, and bridge theoretical knowledge with practical application. Student reflections and instructor observations revealed improvements in analytical engagement, yet challenges emerged regarding AI dependence, variability in AI literacy, and contextual limitations of AI-generated content. Through structured intervention and research-driven refinement, students experienced AI's strengths as a generative tool while recognising the importance of human oversight and critical evaluation. This study contributes a replicable pedagogical model that addresses practical challenges of GenAI integration. It also offers insights into best practices for responsible AI use in cybersecurity education, emphasising the necessity of balancing automation with expert judgment to cultivate industry-ready professionals.
Similar works
The global landscape of AI ethics guidelines
2019 · 4,541 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,395 citations
Fairness through awareness
2012 · 3,270 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations