OpenAlex · Updated hourly · Last updated: 12.04.2026, 01:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Beyond Calibration: Rethinking Algorithmic Fairness through an Intersectional, Justice-Aware Lens

2025 · 0 citations · ScholarSpace (University of Hawaii at Manoa)

Citations: 0 · Authors: 5 · Year: 2025

Abstract

As predictive algorithms increasingly guide high-stakes decisions in fields like criminal justice, healthcare, and finance, the concept of "fairness" often centers on model calibration: the alignment between predicted probabilities and observed outcomes. Calibration is typically treated as a reliable marker of objectivity and fairness. However, this paper argues that in contexts shaped by structural inequalities, including those based on gender, race, and class, calibration fails to account for deeper ethical and social implications. Drawing on research from algorithmic fairness, feminist technology studies, and intersectionality, we challenge the assumption that models calibrated to biased outcomes can be considered fair. This critique is especially urgent for individuals at the intersection of multiple marginalized identities, whose experiences with technology are often shaped by compounded, gendered harms that traditional fairness metrics fail to address. We propose a justice-aware framework for algorithmic fairness that acknowledges the historical and social contexts embedded in data and integrates technical interventions across the AI development lifecycle: before, during, and after model deployment. Rather than treating calibration as an ultimate standard for fairness, we argue it should be viewed as a single tool within a broader, intersectional approach. Our paper makes three key contributions: (1) a conceptual critique of calibration as a fairness metric, (2) a call for intersectional, multi-attribute fairness frameworks that account for gender and other identity factors, and (3) an argument for embedding fairness-enhancing tools within a broader socio-technical and justice-oriented framework that goes beyond mere technical performance to address systemic inequality.
This paper addresses that gap by offering a justice-aware framework that integrates technical fairness interventions with gender-conscious design, participatory governance, and socio-technical accountability, bridging the divide between algorithmic fairness and the lived realities of marginalized groups.
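The abstract's definition of calibration (alignment between predicted probabilities and observed outcomes) can be made concrete with a small sketch. Everything below, including the binning scheme, the group labels, and the synthetic data, is an illustrative assumption, not the paper's method: it shows how a model can score near zero calibration error within each group even when the observed outcome rates it mirrors are themselves products of structural inequality.

```python
def calibration_error(probs, outcomes, n_bins=5):
    """Weighted mean absolute gap between average predicted probability
    and observed outcome rate, over equal-width probability bins
    (a simple expected-calibration-error style measure)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    total, weighted_gap = len(probs), 0.0
    for b in bins:
        if not b:
            continue
        avg_p = sum(p for p, _ in b) / len(b)   # mean prediction in the bin
        rate = sum(y for _, y in b) / len(b)    # observed outcome rate in the bin
        weighted_gap += abs(avg_p - rate) * len(b) / total
    return weighted_gap

# Synthetic groups: predictions exactly mirror historical outcome rates,
# so calibration error is near zero for both groups -- even though those
# rates may encode biased past decisions rather than true risk.
print(calibration_error([0.8] * 10, [1] * 8 + [0] * 2))  # group A: ~0.0
print(calibration_error([0.3] * 10, [1] * 3 + [0] * 7))  # group B: ~0.0
```

The point of the sketch is the paper's critique in miniature: the metric only compares predictions against recorded outcomes, so it cannot detect when the outcomes themselves are unjust.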



Topics

Ethics and Social Impacts of AI · Digital Economy and Work Transformation · Artificial Intelligence in Healthcare and Education