This is an overview page with metadata for this scientific article. The full article is available from the publisher.
What students really think: unpacking AI ethics in educational assessments through a triadic framework
Citations: 3
Authors: 3
Year: 2025
Abstract
The rise of AI in educational assessments has significantly enhanced efficiency and accuracy. However, it also introduces critical ethical challenges, including bias in grading, data privacy risks, and accountability gaps. These issues can undermine trust in AI-driven assessments and compromise educational fairness, making a structured ethical framework essential. To address these challenges, this study empirically validates an existing triadic ethical framework for AI-assisted educational assessments, originally proposed by Lim, Gottipati and Cheong (In: Keengwe (ed) Creative AI tools and ethical implications in teaching and learning, IGI Global, 2023), grounded in student perceptions. The framework encompasses three ethical domains—physical, cognitive, and informational—which intersect with five key assessment pipeline stages: system design, data stewardship, assessment construction, administration, and grading. By structuring AI-driven assessments within this ethical framework, the study systematically maps key concerns, including fairness, accountability, privacy, and academic integrity. To validate the proposed framework, Structural Equation Modeling (SEM) was employed to examine its relevance and alignment with learners' ethical concerns. Specifically, the study aims to (1) evaluate how well the triadic framework aligns with learners' perceptions of ethical issues using SEM analysis, and (2) examine relationships among the assessment pipeline stages, ethical considerations, pedagogical outcomes, and learner experiences. Findings reveal robust connections between AI-assisted assessment stages, ethical concerns, and learners' perspectives. By bridging theoretical validation with practical insights, this study emphasizes actionable strategies to support the development of AI-driven assessment systems that balance technological efficiency, pedagogical effectiveness, and ethical responsibility.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations