This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Beyond Calibration: Rethinking Algorithmic Fairness through an Intersectional, Justice-Aware Lens
Citations: 0
Authors: 5
Year: 2025
Abstract
As predictive algorithms increasingly guide high-stakes decisions in fields like criminal justice, healthcare, and finance, the concept of "fairness" often centers on model calibration: the alignment between predicted probabilities and observed outcomes. Calibration is typically treated as a reliable marker of objectivity and fairness. However, this paper argues that in contexts shaped by structural inequalities, including those based on gender, race, and class, calibration fails to account for deeper ethical and social implications. Drawing on research from algorithmic fairness, feminist technology studies, and intersectionality, we challenge the assumption that models calibrated to biased outcomes can be considered fair. This critique is especially urgent for individuals at the intersection of multiple marginalized identities, whose experiences with technology are often shaped by compounded, gendered harms that traditional fairness metrics fail to address. We propose a justice-aware framework for algorithmic fairness that acknowledges the historical and social contexts embedded in data and integrates technical interventions across the AI development lifecycle (before, during, and after model deployment). Rather than treating calibration as an ultimate standard for fairness, we argue it should be viewed as a single tool within a broader, intersectional approach. Our paper makes three key contributions: (1) a conceptual critique of calibration as a fairness metric, (2) a call for intersectional, multi-attribute fairness frameworks that account for gender and other identity factors, and (3) an argument for embedding fairness-enhancing tools within a broader socio-technical and justice-oriented framework that goes beyond mere technical performance to address systemic inequality.
This paper addresses that gap by offering a justice-aware framework that integrates technical fairness interventions with gender-conscious design, participatory governance, and socio-technical accountability, bridging the divide between algorithmic fairness and the lived realities of marginalized groups.
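The abstract's central claim, that a model can be well calibrated while still reproducing biased outcomes, can be illustrated numerically. The sketch below is not from the paper; it is a minimal, hypothetical example (invented group names and base rates) showing that a model predicting each group's observed base rate is perfectly calibrated within every group, yet assigns systematically different scores across groups whenever the outcome labels themselves encode unequal base rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical groups whose observed base rates differ -- rates that
# may already reflect structural inequality baked into the labels.
base_rates = {"group_a": 0.2, "group_b": 0.5}

gaps = {}         # within-group calibration gap
mean_scores = {}  # average score assigned to each group

for group, rate in base_rates.items():
    # Simulated binary outcomes drawn at the group's base rate.
    outcomes = rng.binomial(1, rate, size=10_000)
    # A model that predicts each group's base rate for everyone is
    # (near-)perfectly calibrated within that group.
    scores = np.full(outcomes.shape, rate)
    gaps[group] = abs(scores.mean() - outcomes.mean())
    mean_scores[group] = scores.mean()

# Calibration holds within each group, yet the score distributions
# differ sharply across groups, mirroring the biased labels.
print(gaps)
print(mean_scores)
```

Both calibration gaps come out near zero, while the mean scores differ by the full gap in base rates, which is the sense in which calibration alone cannot certify fairness.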
Related Works
The global landscape of AI ethics guidelines
2019 · 4,634 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,448 citations
Fairness through awareness
2012 · 3,294 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations