This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The regulatory landscape of generative AI in education and the future of student psychological well-being — a comparative analysis of EU and Chinese legislation
Citations: 0
Authors: 2
Year: 2026
Abstract
Introduction: As generative AI becomes a core driving force in personalized education systems, the subtle, cumulative psychological harm it may cause to minors (such as triggering automation bias and learned helplessness) has become an urgent regulatory issue of global prominence. Although major economies are actively building regulatory frameworks, a systematic evaluation of whether these frameworks adequately protect students' psychological development is still lacking.

Methods: This study employs doctrinal and comparative legal analysis to evaluate the efficacy of the EU AI Act and China's Interim Measures for the Management of Generative Artificial Intelligence Services in safeguarding student mental health. A specialized coding dictionary was constructed to map specific psychological risk dimensions onto provisions in existing regulations and thereby identify potential legal blind spots.

Results: The analysis indicates that both jurisdictions exhibit a structural "Regulatory Mismatch". The EU's risk-based approach effectively regulates high-stakes decision-making (such as automated grading systems) but overlooks the cumulative psychological risks latent in "limited risk" conversational agents (such as daily companion chatbots). Conversely, China's content-based strategy successfully filters explicit illegal information but cannot effectively regulate lawful yet psychologically manipulative adaptive interaction designs in hybrid AI systems (e.g., addictive, high-pressure gamification mechanics).

Discussion: Current global regulatory frameworks prioritize physical safety, data privacy, and ideological security while severely marginalizing psychological "process safety". Policymakers should introduce mandatory Mental Health Impact Assessments (MHIA) for educational tools and expand algorithmic transparency requirements, shifting from mere "technical explainability" to full disclosure of "algorithmic pedagogical logic".
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,593 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,483 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,003 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,824 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations