This is an overview page with metadata for this scientific article. The full article is available from the publisher.
SGBoost<sup>+</sup>: Efficient and Privacy-Preserving Vertical Boosting Trees for Federated Outsourced Training and Inference
Citations: 1
Authors: 8
Year: 2025
Abstract
Vertical federated learning for boosting trees has gained significant attention due to its ability to enable participants to collaboratively train high-quality models while preserving data privacy. However, existing privacy-preserving vertical boosting tree schemes suffer from high computation and communication costs or potential security vulnerabilities. Recently, SGBoost, a federated outsourced training and inference scheme, was proposed to address these challenges. However, its performance and security still require significant improvements. Therefore, we propose SGBoost<sup>+</sup>, an efficient and privacy-preserving vertical boosting tree framework for federated outsourced training and inference. Building upon the strengths of SGBoost, we introduce an RLWE-based lossless and secure internal node construction and an efficient oblivious inference algorithm to complete model training and inference, significantly enhancing both security and efficiency. To reduce communication cost, we design a ciphertext compression algorithm for model training, which drastically minimizes data transmission costs. Additionally, we analyze the security of a symmetric encryption scheme, specify the required security conditions and parameters, and optimize our model inference based on its improved and secure version. Detailed security analysis confirms that SGBoost<sup>+</sup> offers strong privacy guarantees. Extensive experiments demonstrate that SGBoost<sup>+</sup> achieves efficient model training and inference with significantly lower computation and communication costs compared to state-of-the-art schemes.
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,396 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,876 citations
Deep Learning with Differential Privacy
2016 · 5,601 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,567 citations