This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Assessing and Mitigating Intersectional Bias: A Multi-Domain Study on Race and Gender
Citations: 0
Authors: 5
Year: 2025
Abstract
The use of algorithms to aid decision-making is growing in important fields such as healthcare, education, employment, and criminal justice. However, there is a concern that these algorithms may treat some demographic groups unfairly, especially when biases arise from overlapping attributes such as gender and race. In this paper, four real-world datasets (COMPAS, Adult Income, Student Performance, and Heart Disease) are used for a cross-domain investigation of bias. Six widely accepted fairness measures are used to assess bias across race, gender, and race-gender intersections: Demographic Parity Difference (DPD), Disparate Impact (DI), Average Odds Difference (AOD), Equal Opportunity Difference (EOD), and the False Positive Rate (FPR) and False Negative Rate (FNR) gaps. We apply the Exponentiated Gradient (EG) method with Equalized Odds constraints from the Fairlearn framework to mitigate intersectional bias. The results indicate that bias levels dropped notably while predictive performance remained strong. This underscores the need to address intersectional bias and demonstrates that fairness-aware algorithms are crucial for balancing accuracy, equity, and accountability in real-world systems.
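To make the intersectional framing concrete, here is a minimal NumPy sketch of one of the measures named in the abstract, Demographic Parity Difference, computed over joint race-gender subgroups. The toy predictions and group labels are purely illustrative (not from the paper's datasets); the example only demonstrates how a gap that looks moderate across race alone can be larger once intersectional subgroups are considered.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical toy data: binary predictions plus race and gender labels.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
race   = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
gender = np.array(["F", "F", "M", "M", "F", "F", "M", "M"])

# Intersectional subgroups: the joint race-gender identity, e.g. "A-F".
intersection = np.char.add(np.char.add(race, "-"), gender)

dpd_race  = demographic_parity_difference(y_pred, race)          # 0.75 - 0.25 = 0.5
dpd_inter = demographic_parity_difference(y_pred, intersection)  # 1.0  - 0.0  = 1.0
print(dpd_race, dpd_inter)
```

On this toy data the race-only gap is 0.5, but the intersectional gap is 1.0, since the "A-M" subgroup always receives a positive prediction while "B-F" never does. This is the kind of disparity that marginal (single-attribute) auditing can miss.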
Related Works
The global landscape of AI ethics guidelines
2019 · 4,504 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,856 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,377 citations
Fairness through awareness
2012 · 3,267 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations