OpenAlex · Updated hourly · Last updated: 12.03.2026, 13:57

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Approaching Machine Learning Fairness through Adversarial Network

2019 · 8 citations · arXiv (Cornell University) · Open Access
Open full text at publisher

Citations: 8 · Authors: 2 · Year: 2019

Abstract

Fairness is a rising concern in machine learning. In sensitive domains such as criminal justice and loan decisions, eliminating prediction discrimination against certain population groups (characterized by sensitive features such as race and gender) is important for the trustworthiness of a model. In this paper, we present a new general framework to improve machine learning fairness. The goal of our model is to minimize the influence of the sensitive feature from the perspectives of both the data input and the predictive model. To achieve this goal, we reformulate the data input by removing the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature. We propose to learn the non-sensitive input via sampling among features, and we design an adversarial network to minimize the dependence between the reformulated input and the sensitive information. Extensive experiments on three benchmark datasets suggest that our model achieves better results than related state-of-the-art methods with respect to both fairness metrics and prediction performance.
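The abstract's central idea, an adversary that tries to recover the sensitive information while the main model is trained to defeat it, can be sketched as a generic adversarial-debiasing toy in NumPy. This is not the paper's actual architecture; the data, the tiny one-parameter adversary, and all hyperparameters (`lam`, `lr`) are invented purely for illustration.

```python
import numpy as np

# Illustrative sketch of adversarial debiasing: a logistic predictor learns
# the task while an adversary tries to recover the sensitive attribute from
# the predictor's output; the predictor is penalized for making the
# adversary's job easy. All data here is synthetic and made up.
rng = np.random.default_rng(0)

n, d = 500, 4
s = rng.integers(0, 2, size=n).astype(float)    # sensitive attribute (group label)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * s                              # feature 0 leaks the sensitive attribute
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)      # predictor weights (logistic regression on X)
a = 0.0              # deliberately tiny adversary: predicts s from yhat alone
lam, lr = 1.0, 0.1   # fairness trade-off weight and learning rate (arbitrary choices)

for _ in range(300):
    yhat = sigmoid(X @ w)           # predictor output
    shat = sigmoid(a * yhat)        # adversary's guess of the sensitive attribute

    g_task = X.T @ (yhat - y) / n                              # task cross-entropy gradient
    g_adv_a = np.sum((shat - s) * yhat) / n                    # adversary's own gradient
    g_adv_w = X.T @ ((shat - s) * a * yhat * (1 - yhat)) / n   # adversary loss w.r.t. w

    a -= lr * g_adv_a                    # adversary minimizes its loss
    w -= lr * (g_task - lam * g_adv_w)   # predictor minimizes task loss, maximizes adversary loss

acc = float(np.mean((sigmoid(X @ w) > 0.5) == y))
```

The subtraction `g_task - lam * g_adv_w` is the classic gradient-reversal trick: the same adversary loss is descended by the adversary and ascended by the predictor, pushing the predictor's output toward independence from the sensitive attribute.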

Topics

Ethics and Social Impacts of AI · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education