This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
To be forgotten or to be fair: unveiling fairness implications of machine unlearning methods
Citations: 9
Authors: 9
Year: 2024
Abstract
The right to be forgotten (RTBF) allows individuals to request the removal of personal information from online platforms. Researchers have proposed machine unlearning algorithms as a solution for erasing specific data from trained models to support RTBF. However, these methods modify how data are fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help AI practitioners make responsible decisions when adopting these unlearning methods, we present the first study on machine unlearning methods to reveal their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as baseline using three fairness datasets under three different deletion strategies. Results show that non-uniform data deletion with the variant of SISA leads to better fairness compared to ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. This research can help practitioners make informed decisions when implementing RTBF solutions that consider potential trade-offs on fairness.
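The SISA approach mentioned in the abstract (Sharded, Isolated, Sliced, and Aggregated training) can be illustrated with a toy sketch: training data is split into shards, one constituent model is trained per shard, and unlearning a data point only requires retraining that point's shard. The shard assignment, the trivial majority-label "model", and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of SISA-style unlearning. The per-shard "model" here is
# just the shard's majority label; a real system would train a neural
# network per shard. All names are hypothetical.

NUM_SHARDS = 3

def shard_of(idx):
    # Assign each training example to a shard by its index.
    return idx % NUM_SHARDS

def train_shard(shard_data):
    # Toy constituent "model": the majority label within the shard.
    labels = [label for _, label in shard_data]
    return max(set(labels), key=labels.count) if labels else None

def train_all(data):
    # Partition the data into isolated shards and train one model each.
    shards = {s: [] for s in range(NUM_SHARDS)}
    for idx, example in enumerate(data):
        shards[shard_of(idx)].append(example)
    models = {s: train_shard(shards[s]) for s in shards}
    return shards, models

def unlearn(shards, models, shard_id, example):
    # Delete the example and retrain ONLY its shard; the other
    # constituent models are untouched, which is SISA's main saving
    # over full retraining.
    shards[shard_id].remove(example)
    models[shard_id] = train_shard(shards[shard_id])

def predict(models):
    # Aggregate by majority vote over the constituent models.
    votes = [m for m in models.values() if m is not None]
    return max(set(votes), key=votes.count)
```

Because deletion requests only touch one shard, unlearning cost is roughly proportional to shard size rather than to the whole training set; the fairness question the paper studies is how such sharding and deletion strategies affect outcomes across demographic groups.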
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,397 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,878 citations
Deep Learning with Differential Privacy
2016 · 5,604 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,569 citations