This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Neural gradient boosting in federated learning for hemodynamic instability prediction: towards a distributed and scalable deep learning-based solution.
1
Citations
6
Authors
2022
Year
Abstract
Federated learning (FL) is a privacy-preserving approach to learning that overcomes issues related to data access, privacy, and security, which represent key challenges in the healthcare sector. FL enables hospitals to collaboratively learn a shared prediction model without moving the data outside their secure infrastructure. To do so, model updates are sent to a central server, where they are aggregated, and the aggregated model is sent back to the sites for further training. Although widely applied to neural networks, FL architectures still lack scalability and support for machine learning techniques such as decision tree-based models. The latter, when embedded in FL, suffer from the costly encryption techniques applied to share sensitive information such as the splitting decisions within the trees. In this work, we focus on predicting hemodynamic instability in ICU patients by enabling distributed gradient boosting in FL. We employ a clinical dataset from 25 hospitals generated from the Philips eICU database, and we design an FL pipeline that supports neural-based boosting models as well as conventional neural networks. This enhancement enables decision tree models in FL, which represent the state-of-the-art approach for classification tasks involving tabular clinical data. Comparable performance in terms of accuracy, precision, recall, and F1 score was reached when detecting hemodynamic instability in FL and in a centralized setup. In summary, we demonstrate the feasibility of a scalable FL approach for detecting hemodynamic instability in ICU data, which preserves privacy and retains the deployment benefits of a neural-based architecture.
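The round trip described in the abstract — local training at each hospital, aggregation of the updates at a central server, and redistribution of the shared model — can be sketched with a plain federated-averaging loop. This is an illustrative toy (logistic-regression SGD on synthetic data, names like `federated_round` are invented here), not the paper's actual pipeline or model:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few steps of logistic-regression SGD."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

def federated_round(global_w, site_data):
    """Server side: aggregate site models by a size-weighted average."""
    updates, sizes = [], []
    for X, y in site_data:                     # raw data never leaves the site
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

# Synthetic stand-in for three hospitals' tabular data.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(20):                            # 20 communication rounds
    w = federated_round(w, sites)
```

Only model weights cross the site boundary in each round, which is the privacy property the abstract relies on; the paper's contribution is extending this scheme beyond plain neural networks to neural-based gradient boosting.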
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,397 cit.
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,878 cit.
Deep Learning with Differential Privacy
2016 · 5,604 cit.
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 cit.
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,569 cit.