This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Sanctioned AI as a Pedagogical Tool: a Quasi-Experiment on the Formation of a New Academic Subjectivity
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Introduction. With the spread of generative AI (GenAI) in education, discussions of a "crisis of authorship" have intensified. While most institutional responses focus on prohibition, this paper examines how student subjectivity changes when AI use is legitimized. It explores how interdictive (prohibition) and prescriptive (mandatory use with verification) stances modulate behavioral strategies, academic ethics, and responsibility.

Methodology and sources. A controlled quasi-experiment (n = 10) was conducted within bachelor's thesis projects. Students were divided into two didactically isolated groups: mandatory conscious AI use versus an interdictive framework. Data collection included document analysis, reflective surveys, pedagogical observation, and verification metrics. Special attention was paid to correlating formal indicators with subjective interpretations.

Results and discussion. The data demonstrate an association between AI legitimization and process-oriented ethics. The prescriptive group declared a stronger sense of authorship, reported reduced ethical discomfort, and developed critical verification practices. Conversely, the interdictive group showed uncritical borrowing; prohibition failed to stimulate autonomous ethical reflection. Legitimized AI catalyzed cognitive activity, transforming subjectivity from task performer to designer of the epistemic environment.

Conclusion. Prohibiting GenAI fails to strengthen ethical responsibility and may instead promote passive trust. Normative AI integration transforms academic subjectivity: the student becomes a "prompt designer", "model operator", and "arbiter of knowledge". This requires rethinking educational practices and the fundamental categories of authorship, responsibility, and competence.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations