OpenAlex · Updated hourly · Last updated: 23.03.2026, 06:43

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The SAGE framework for developing critical thinking and responsible generative AI use in cybersecurity education

2025 · 0 citations · Discover Education · Open Access
Open full text at publisher

Citations: 0 · Authors: 2 · Year: 2025

Abstract

The rapid advancement of Generative Artificial Intelligence (GenAI) has introduced new opportunities for transforming higher education, particularly in fields requiring critical analysis and regulatory compliance, such as cybersecurity management. This study introduces the Structured AI Guided Education (SAGE) framework, which integrates generative AI responsibly to cultivate critical thinking in cybersecurity education and offers systematic, ready-to-adopt implementation blueprints. The implementation strategy followed a two-stage approach, embedding GenAI within tutorial exercises and assessment tasks. Tutorials enabled students to generate, critique, and refine AI-assisted cybersecurity policies, whilst assessments required them to apply AI-generated outputs within real-world industry scenarios, ensuring alignment with academic standards and regulatory requirements. The research provides practical blueprints for curriculum design, tutorial structure, and assessment methodologies that enable educators to leverage GenAI whilst maintaining academic rigour and developing critical thinking competencies. Findings indicate that AI-assisted learning significantly enhanced students’ ability to evaluate security policies, refine risk assessments, and bridge theoretical knowledge with practical application. Student reflections and instructor observations revealed improvements in analytical engagement, yet challenges emerged regarding AI dependence, variability in AI literacy, and contextual limitations of AI-generated content. Through structured intervention and research-driven refinement, students experienced AI’s strengths as a generative tool while recognising the importance of human oversight and critical evaluation. This study contributes a replicable pedagogical model that addresses practical challenges of GenAI integration. It also offers insights into best practices for responsible AI use in cybersecurity education, emphasising the necessity of balancing automation with expert judgment to cultivate industry-ready professionals.


Topics

Ethics and Social Impacts of AI · Information and Cyber Security · Artificial Intelligence in Healthcare and Education