This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Privacy-Preserving Deep Reinforcement Learning Based On Adaptive Noise For Sepsis Treatment
Citations: 1 · Authors: 4 · Year: 2023
Abstract
In recent years, deep reinforcement learning has achieved remarkable results in personalized treatment recommendation, especially in sepsis treatment. However, deep reinforcement learning suffers from privacy leakage. Differential privacy is a reliable privacy-preserving technique that has been proven effective in protecting machine learning models from adversarial attacks on sensitive data. However, existing methods add the same amount of noise to every model gradient, which dramatically reduces the usability of the model. In this paper, we propose a privacy-preserving deep reinforcement learning method based on relevance analysis, which adds noise adaptively. This approach aims to better balance the usability and privacy preservation of reinforcement learning models for sepsis treatment. Our experimental results show that this mechanism achieves higher model usability at the same privacy-preserving strength.
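The abstract contrasts uniform gradient noise with relevance-weighted adaptive noise. The paper's exact relevance analysis is not specified on this page, so the following is only a minimal sketch of the general idea: clip the gradient DP-SGD-style to bound sensitivity, then scale the Gaussian noise per coordinate by a hypothetical relevance score, so more relevant coordinates receive less noise.

```python
import numpy as np

def adaptive_dp_gradient(grad, relevance, clip_norm=1.0, base_sigma=1.0, rng=None):
    """Illustrative sketch, not the paper's method.

    Clips a gradient to bound its sensitivity, then adds Gaussian noise
    whose per-coordinate scale shrinks with a (hypothetical) relevance
    score, so highly relevant coordinates are perturbed less.
    """
    rng = np.random.default_rng() if rng is None else rng
    # DP-SGD-style clipping: rescale so the L2 norm is at most clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Normalize relevance scores to [0, 1].
    r = (relevance - relevance.min()) / (np.ptp(relevance) + 1e-12)
    # Hypothetical schedule: noise scale drops by up to half for the
    # most relevant coordinates (the paper may use a different rule).
    sigma = base_sigma * clip_norm * (1.0 - 0.5 * r)
    return clipped + rng.normal(0.0, sigma, size=grad.shape)
```

With `base_sigma=0` the function reduces to plain gradient clipping, which makes the clipping step easy to check in isolation.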
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,395 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,872 citations
Deep Learning with Differential Privacy
2016 · 5,595 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,564 citations