OpenAlex · Updated hourly · Last updated: 28.03.2026, 20:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Beyond Statistical Fairness: A Systematic Review of Novel Metrics for Identifying Algorithmic Bias in AI-Driven Governance

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0

Authors: 2

Year: 2026

Abstract

Artificial Intelligence (AI) systems are increasingly embedded in public governance for decision-making in areas such as welfare distribution, predictive policing, taxation, immigration, and electoral administration. While these systems promise efficiency and scalability, they also introduce significant risks of algorithmic bias with direct implications for equity, accountability, and democratic legitimacy. This study presents a systematic literature review (SLR) on metrics for identifying algorithmic bias in AI-driven governance models, with a particular emphasis on novel and governance-aware measurement approaches. The review follows PRISMA guidelines and analyzes peer-reviewed journal articles, conference proceedings, and high-impact policy reports published between 2014 and 2025. Literature was sourced from Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and SpringerLink using structured search strings related to algorithmic bias, fairness metrics, and AI governance. After a multi-stage screening and eligibility process, the selected studies were subjected to qualitative thematic synthesis and comparative analysis. The results reveal that traditional statistical fairness metrics such as demographic parity, equalized odds, and predictive parity are widely used but insufficient for governance contexts due to their lack of contextual, temporal, and institutional sensitivity. The review identifies and classifies emerging bias metrics into five major categories: causal metrics, intersectional metrics, temporal and dynamic metrics, structural–institutional metrics, and explainability-driven indicators. These novel metrics demonstrate stronger alignment with governance principles, particularly in addressing power asymmetries, historical discrimination, and policy constraints. The study contributes a consolidated taxonomy of bias metrics and proposes an integrated, multi-dimensional framework for evaluating algorithmic bias in AI-driven governance systems. The findings offer practical guidance for policymakers, regulators, and system designers, while highlighting critical research gaps related to standardization, empirical validation, and Global South governance contexts.
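To make the traditional statistical metrics named in the abstract concrete, the following is a minimal illustrative sketch (not taken from the reviewed studies; the toy data and function names are hypothetical) of how a demographic parity difference and an equalized-odds gap can be computed for binary predictions over two demographic groups:

```python
def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between groups (demographic parity)."""
    rates = []
    for g in set(group):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap across groups in true-positive rate (label 1)
    or false-positive rate (label 0), i.e. an equalized-odds violation."""
    gaps = []
    for label in (0, 1):
        rates = []
        for g in set(group):
            preds = [p for t, p, gr in zip(y_true, y_pred, group)
                     if gr == g and t == label]
            rates.append(sum(preds) / len(preds))
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Hypothetical toy data: two groups of four individuals each.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))        # 0.5
print(equalized_odds_gap(y_true, y_pred, group))     # 0.5
```

A value of 0 on either metric would indicate parity across groups; the review argues that such single-number, context-free scores miss the temporal and institutional dimensions relevant to governance settings.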

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Digital Economy and Work Transformation