This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Distributed Fair Machine Learning Framework with Private Demographic Data Protection
Citations: 1
Authors: 4
Year: 2019
Abstract
Fair machine learning has become a significant research topic with broad societal impact. However, most fair learning methods require direct access to personal demographic data, whose use is increasingly restricted to protect user privacy (e.g., by the EU General Data Protection Regulation). In this paper, we propose a distributed fair learning framework that protects the privacy of demographic data. We assume this data is privately held by a third party, which can communicate with the data center (responsible for model development) without revealing the demographic information. We propose a principled approach to designing fair learning methods under this framework, instantiate four such methods, and show that they consistently outperform their existing counterparts in both fairness and accuracy across three real-world data sets. We theoretically analyze the framework and prove that it can learn models with high fairness or high accuracy, with the trade-off between the two balanced by a threshold variable.
Related Work
k-Anonymity: A Model for Protecting Privacy
2002 · 8,414 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,920 citations
Deep Learning with Differential Privacy
2016 · 5,649 citations
Federated Machine Learning
2019 · 5,624 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,600 citations