This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Applied AI and Pedagogical Judgment in Multilingual Teaching and Learning: Bringing EQUAL AI to Life
Citations: 0
Authors: 1
Year: 2025
Abstract
In recent years, a variety of ethical frameworks have been proposed for utilizing AI technologies in higher education. A pervasive issue across many of these frameworks is that they do not adequately bridge the gap between theoretical principles and practical application. Several frameworks articulate their underlying theoretical values; however, very few provide concrete examples of how these values are embodied, negotiated, or resisted within day-to-day instructional environments. This article addresses this gap by examining the implementation of the EQUAL AI framework, as proposed by Davoodi in 2024, within a multilingual graduate education context. Utilizing a narrative inquiry approach, this study analyzes three classroom cases drawn from a graduate-level research methodology course that illustrate the ways AI has transformed pedagogical tensions associated with language, authorship, confidence, and ethical accountability. The findings emphasize the need to view AI use as a function of instructional judgment rather than solely as a compliance issue or a neutral pedagogical tool. Ethical accountability in AI-mediated learning depends on the sustained visibility of students’ intellectual labor, as well as on pedagogical strategies that clarify, repair, and reframe AI use as learning unfolds. This article further demonstrates that conceptual frameworks such as EQUAL AI derive pedagogical power only when enacted at the local level through context-sensitive instructional decision-making embedded in lived teaching experiences. As such, this study contributes to research on AI in higher education by reconceptualizing AI integration as an adaptive, human-centered pedagogical process within multilingual educational settings, where issues of academic identity, linguistic vulnerability, and legitimacy are continually negotiated.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations