This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Ineffectiveness of Instructor-Level GenAI-Use Policies
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Late 2022 saw the release of ChatGPT, the first of many powerful generative artificial intelligence (GenAI) tools that quickly transformed the way people work and study. Although Japanese universities were cognizant of GenAI’s increasing societal impact and its potential applications in education, they overwhelmingly left the responsibility for establishing and enforcing usage policies for GenAI and other digital tools (DTs) in individual courses to instructors. Allowing instructors this free hand in determining GenAI’s role in their courses seems both sound and appropriate, as students primarily engage with course objectives, materials, and pedagogies through their instructors. Moreover, instructors are best positioned to monitor and discourage students’ misuse of GenAI/DTs. However, it is argued here that, given GenAI’s unprecedented power and allure, leaving GenAI policy monitoring solely to instructors is not only ineffectual as a cheating deterrent but also burdensome, as it imposes numerous additional demands on instructors’ time and energy. The current article aims to support this argument by providing analyses, spanning a ten-year period, of course failures resulting directly from academic misconduct through GenAI/DT misuse in English-only research reports submitted by Japanese university students enrolled in an English language lecture course. The analyses reveal a dramatic increase in such course failures after the launch of ChatGPT, specifically indicating the ineffectiveness of individual instructors in establishing and monitoring course-level GenAI policies. The resulting additional burdens experienced by the course instructor are discussed, and a call is made for increased institutional support.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations