OpenAlex · Updated hourly · Last updated: 2026-05-01, 22:37

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Algorithmic Bias in Automated Decision-Making: A Statistical Study with Legal and Regulatory Implications

2026 · 0 citations · Digital Technologies Research and Applications · Open Access

Citations: 0 · Authors: 7 · Year: 2026

Abstract

Algorithmic decision systems are increasingly deployed in high-stakes domains such as credit scoring, recruiting, and the allocation of government resources. Although these systems are often claimed to be objective and efficient, concerns persist that they may perpetuate structural inequalities. This paper examines the effect of a fairness-aware pre-processing technique, reweighing, on the performance of a predictive system in a controlled simulation environment. Using a synthetically created credit approval dataset with structural disadvantage embedded, we compare a logistic regression classifier with and without reweighing. Fairness is measured using demographic parity disparity (DPD), the disparate impact ratio (DIR), and the equalized odds difference (EO), alongside predictive accuracy. In a single test scenario (seed = 42), reweighing does not improve all fairness metrics uniformly. However, a robustness analysis across 50 independent random seeds shows modest average reductions in demographic parity disparity and equalized odds difference under reweighing, with little change in predictive accuracy. A threshold sensitivity analysis further shows that all three fairness metrics vary substantially with the decision threshold. These results suggest that fairness-aware pre-processing can yield systematic improvements in expectation, although trade-offs among fairness metrics and predictive performance remain context-dependent.
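The abstract's three fairness metrics and the reweighing step can be illustrated with a short sketch. This is not the paper's implementation: the function names and the toy data are illustrative, the metric definitions follow their standard formulations (selection-rate gap for DPD, selection-rate ratio for DIR, the larger of the TPR/FPR gaps for EO), and the weight formula is the widely used Kamiran–Calders reweighing rule, which the paper's method name suggests but which is assumed here.

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Compute DPD, DIR, and equalized odds difference for a binary
    protected attribute (group 0 = disadvantaged, group 1 = privileged)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def sel_rate(g):                         # selection rate P(y_hat=1 | group=g)
        return y_pred[group == g].mean()

    dpd = abs(sel_rate(1) - sel_rate(0))     # demographic parity disparity
    dir_ = sel_rate(0) / sel_rate(1)         # disparate impact ratio

    def cond_rate(g, y):                     # P(y_hat=1 | group=g, y_true=y)
        mask = (group == g) & (y_true == y)
        return y_pred[mask].mean()

    tpr_gap = abs(cond_rate(1, 1) - cond_rate(0, 1))
    fpr_gap = abs(cond_rate(1, 0) - cond_rate(0, 0))
    eo = max(tpr_gap, fpr_gap)               # equalized odds difference
    return dpd, dir_, eo

def reweighing_weights(y, group):
    """Kamiran-Calders reweighing: w(g, y) = P(g) * P(y) / P(g, y),
    so each (group, label) cell is reweighted toward statistical independence."""
    y, group = np.asarray(y), np.asarray(group)
    w = np.empty(len(y), dtype=float)
    for g in (0, 1):
        for label in (0, 1):
            mask = (group == g) & (y == label)
            w[mask] = (group == g).mean() * (y == label).mean() / mask.mean()
    return w
```

The returned weights can be passed to a classifier's `sample_weight` argument (e.g. scikit-learn's `LogisticRegression.fit`), which is presumably how the paper's pre-processing variant differs from the baseline.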
