OpenAlex · Updated hourly · Last updated: 07.05.2026, 23:30

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

The regulatory landscape of generative AI in education and the future of student psychological well-being — a comparative analysis of EU and Chinese legislation

2026 · 0 citations · Frontiers in Education · Open Access
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2026

Abstract

Introduction: As generative AI becomes a core driving force in personalized education systems, the subtle, cumulative psychological harm it may cause to minors (such as triggering automation bias and learned helplessness) has evolved into an urgent regulatory issue of global prominence. Although major economies are actively building regulatory frameworks, there is still no systematic evaluation of how adequately these frameworks protect students' psychological development.

Methods: This study employs doctrinal and comparative legal analysis to evaluate the efficacy of the EU AI Act and China's Interim Measures for the Administration of Generative Artificial Intelligence Services in safeguarding student mental health. A specialized coding dictionary was constructed to map specific psychological risk dimensions onto provisions in existing regulations and thereby identify potential legal blind spots.

Results: The analysis indicates that both jurisdictions exhibit a structural "regulatory mismatch". The EU's risk-based approach effectively regulates high-stakes decision-making (such as automated grading systems) but overlooks the cumulative psychological risks latent in "limited risk" conversational agents (such as daily companion chatbots). Conversely, China's content-based strategy successfully filters explicit illegal information but cannot effectively regulate lawful yet psychologically manipulative adaptive interaction designs in hybrid AI systems (e.g., addictive, high-pressure gamification mechanisms).

Discussion: Current global regulatory frameworks overly prioritize physical safety, data privacy, and ideological security while severely marginalizing psychological "process safety". Policymakers must urgently introduce mandatory Mental Health Impact Assessments (MHIA) for educational tools and expand algorithmic transparency requirements, shifting from mere technical explainability to full disclosure of "algorithmic pedagogical logic".
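The coding-dictionary method described in the abstract (mapping psychological risk dimensions onto regulatory provisions to surface legal blind spots) can be sketched as a simple lookup exercise. The sketch below is purely illustrative: the risk-dimension labels and the provision mappings are hypothetical placeholders, not the study's actual coding dictionary.

```python
# Illustrative sketch: a coding dictionary maps psychological risk
# dimensions to the regulatory provisions coded as addressing them;
# dimensions with no mapped provision surface as potential blind spots.
# All entries below are hypothetical examples.

RISK_DIMENSIONS = [
    "automation_bias",
    "learned_helplessness",
    "cumulative_conversational_harm",
    "manipulative_gamification",
]

# Hypothetical mapping: risk dimension -> provisions coded against it.
CODING_DICTIONARY = {
    "automation_bias": ["EU AI Act Art. 14 (human oversight)"],
    "learned_helplessness": [],            # nothing coded -> blind spot
    "cumulative_conversational_harm": [],  # nothing coded -> blind spot
    "manipulative_gamification": ["Interim Measures (illustrative)"],
}

def find_blind_spots(dimensions, dictionary):
    """Return risk dimensions with no mapped regulatory provision."""
    return [d for d in dimensions if not dictionary.get(d)]

print(find_blind_spots(RISK_DIMENSIONS, CODING_DICTIONARY))
# -> ['learned_helplessness', 'cumulative_conversational_harm']
```

In this toy run, the two unmapped dimensions correspond to the kind of "regulatory mismatch" the abstract reports: risks that are real but fall outside the provisions either framework codes for.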


Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Digital Mental Health Interventions