This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A Trust and Explainable Federated Deep Learning Framework in Zero Touch B5G Networks
21 citations · 3 authors · published 2022
Abstract
The emergent Zero-touch network and Service Management (ZSM) paradigm aims to automate the orchestration and management of running network slices in Beyond 5G (B5G) networks with an unprecedented level of scalability. To achieve this vision, ZSM calls for extensive use of advanced deep learning algorithms in order to build efficient decisions dynamically. In this context, Federated deep Learning (FL) has proven its efficiency not only in building collaborative deep learning models among several network slices, but also in ensuring the privacy and isolation of those slices. Indeed, FL-based solutions give "machine-centric" decisions about running network slices and their performance, which are then executed/applied by managers, i.e., the slice manager staff/module. However, FL-enabled solutions do not provide any details about why and how such decisions were made, and thus the decisions cannot be properly trusted or understood by slice managers. To alleviate this issue, we leverage the eXplainable Artificial Intelligence (XAI) paradigm, which aims to improve the transparency of the black-box FL decision-making process. In particular, XAI helps explain FL-based decisions so that they become interpretable and trustable by network slice managers. In this paper, we design a novel XAI-powered framework to explain FL-based decisions. We first build a deep learning model in a federated way to predict key performance indicators (KPIs) of network slices. Our FL-based KPI prediction is useful for the configuration and management of the network slice lifecycle, especially for detecting Service Level Agreement (SLA) violations and for network slice re-configuration.
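The abstract does not detail the aggregation scheme used to train the shared KPI predictor; a common choice in federated learning is FedAvg, where each slice trains locally and a server averages the model weights, weighted by local dataset size. The following is a minimal numpy sketch of that aggregation step only (the function name, flat weight vectors, and sample counts are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of client model weights.

    client_weights: list of 1-D numpy arrays (one flat weight vector per slice)
    client_sizes:   number of local training samples per slice
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Illustrative: three network slices contribute local model updates;
# the third slice has twice as much local data, so it gets twice the weight.
aggregated = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    [10, 10, 20],
)
# aggregated == [3.5, 4.5]
```

Only the aggregated weights leave each slice, which is what preserves the data privacy and isolation the abstract refers to.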
Then, we develop several XAI models on top of our FL-based model, such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), RuleFit, and Partial Dependence Plots (PDP), to enhance the level of trust, credibility (of the local data/model), transparency, and explanation of FL-based decisions for different B5G network stakeholders, such as slice managers, while preserving data privacy. Experimental results show the efficiency of our XAI-powered framework in explaining FL-based decisions related to latency KPI predictions.
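To give a sense of the SHAP-style attributions mentioned above: for a linear model with independent features, exact Shapley values have the closed form phi_i = w_i * (x_i - E[x_i]), and they sum with the base value f(E[x]) to the model's prediction. This is only an illustrative sketch (the toy coefficients, background data, and instance are assumptions, not the paper's FL model):

```python
import numpy as np

def linear_shap(w, x, background):
    """Exact SHAP values for a linear model with independent features:
    phi_i = w_i * (x_i - E[x_i]), where E[x] is the background mean."""
    return w * (x - background.mean(axis=0))

# Hypothetical latency predictor over two slice features (e.g. load, traffic)
w = np.array([2.0, -1.0])                         # model coefficients
background = np.array([[0.0, 0.0], [2.0, 4.0]])   # reference data, mean [1, 2]
x = np.array([3.0, 1.0])                          # instance to explain
phi = linear_shap(w, x, background)
# phi == [4.0, 1.0]; phi.sum() + f(E[x]) recovers the prediction f(x)
```

This additivity property (local attributions that sum to the prediction) is what lets a slice manager see which KPI inputs drove a given latency prediction.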
Related works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,400 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,884 citations
Deep Learning with Differential Privacy
2016 · 5,608 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 citations