OpenAlex · Updated hourly · Last updated: 15 March 2026, 00:38

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

How do the existing fairness metrics and unfairness mitigation algorithms contribute to ethical learning analytics?

2022 · 56 citations · British Journal of Educational Technology

56 citations · 6 authors · 2022

Abstract

With the widespread use of learning analytics (LA), ethical concerns about fairness have been raised. Research shows that LA models may be biased against students from certain demographic subgroups. Although fairness has received significant attention in the broader machine learning (ML) community over the last decade, it has only recently received attention in LA. Furthermore, how to choose an unfairness mitigation algorithm or fairness metric for a particular context remains largely unknown. On this premise, we performed a comparative evaluation of selected unfairness mitigation algorithms regarded in the fair-ML community as having shown promising results. Using three years of program dropout data from an Australian university, we comparatively evaluated how the unfairness mitigation algorithms contribute to ethical LA by testing several hypotheses across fairness and performance metrics. Interestingly, our results show that data bias does not necessarily result in predictive bias. Perhaps not surprisingly, our test of the fairness-utility tradeoff shows that ensuring fairness does not always lead to a drop in utility; indeed, under specific circumstances, ensuring fairness might even enhance utility. Our findings may, to some extent, guide the selection of fairness algorithms and metrics for a given context.

Practitioner notes

What is already known about this topic
- LA is increasingly used to derive actionable insights about students and drive student success.
- LA models have been found to make discriminatory decisions against certain student demographic subgroups, raising ethical concerns.
- Fairness in education is a nascent field. Only a few works have examined fairness in LA and followed up by ensuring fair LA models.

What this paper adds
- A juxtaposition of unfairness mitigation algorithms across the entire LA pipeline, showing how they compare and how each of them contributes to fair LA.
- Ensuring ethical LA does not always lead to a dip in performance; sometimes it actually improves performance.
- Fairness in LA has so far focused on some form of outcome equality; however, equality of outcome may be possible only when the playing field is levelled.

Implications for practice and/or policy
- Based on the desired notion of fairness and which segment of the LA pipeline is accessible, a fairness-minded decision maker can decide which algorithm to use to achieve their ethical goals.
- LA practitioners can aim for more ethical LA models without trading away significant utility by selecting algorithms that find the right balance between the two objectives.
- Fairness-enhancing technologies should be used cautiously as guides, not as final decision makers. Human domain experts must be kept in the loop to move fair LA beyond equality towards equitable LA.
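The abstract above compares unfairness mitigation algorithms against fairness metrics without listing the specific metrics here. As a minimal illustration only (not the paper's code or data), two widely used group-fairness metrics for a binary dropout predictor, statistical parity difference and equal-opportunity difference, can be computed as follows. All student data below is hypothetical.

```python
# Illustrative sketch of two common group-fairness metrics.
# Values near 0 indicate parity between the two subgroups.

def statistical_parity_difference(y_pred, group):
    """P(pred = 1 | group A) - P(pred = 1 | group B)."""
    a = [p for p, g in zip(y_pred, group) if g == "A"]
    b = [p for p, g in zip(y_pred, group) if g == "B"]
    return sum(a) / len(a) - sum(b) / len(b)

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(group A) - TPR(group B): gap in true-positive rates."""
    def tpr(g):
        # Predictions for members of group g whose true label is 1
        pos = [p for t, p, gg in zip(y_true, y_pred, group)
               if gg == g and t == 1]
        return sum(pos) / len(pos)
    return tpr("A") - tpr("B")

# Hypothetical labels and predictions for 8 students in two subgroups
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(statistical_parity_difference(y_pred, group))          # 0.0: equal positive rates
print(equal_opportunity_difference(y_true, y_pred, group))   # negative: A's TPR is lower
```

Note how the two metrics can disagree on the same predictions: both groups receive positive predictions at the same rate (statistical parity holds), yet at-risk students in group A are detected less often than those in group B, which echoes the paper's point that the choice of metric matters for a given context.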


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)