This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI Approaches in Federated Learning: Systematic Review (Preprint)
Citations: 0
Authors: 2
Year: 2024
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) has, in the recent past, experienced a rebirth with the growth of generative AI systems such as ChatGPT and Bard. These systems are trained with billions of parameters and have enabled widespread accessibility and understanding of AI among different user groups. Widespread adoption of AI has led to the need for understanding how machine learning (ML) models operate to build trust in them. An understanding of how these models generate their results remains a huge challenge that explainable AI seeks to solve. Federated learning (FL) grew out of the need to have privacy-preserving AI by having ML models that are decentralized but still share model parameters with a global model. </sec> <sec> <title>OBJECTIVE</title> This study sought to examine the extent of development of the explainable AI field within the FL environment in relation to the main contributions made, the types of FL, the sectors it is applied to, the models used, the methods applied by each study, and the databases from which sources are obtained. </sec> <sec> <title>METHODS</title> A systematic search in 8 electronic databases, namely, Web of Science Core Collection, Scopus, PubMed, ACM Digital Library, IEEE Xplore, Mendeley, BASE, and Google Scholar, was undertaken. </sec> <sec> <title>RESULTS</title> A review of 26 studies revealed that research on explainable FL is steadily growing despite being concentrated in Europe and Asia. The key determinants of FL use were data privacy and limited training data. Horizontal FL remains the preferred approach for federated ML, whereas post hoc explainability techniques were preferred. </sec> <sec> <title>CONCLUSIONS</title> There is potential for development of novel approaches and improvement of existing approaches in the explainable FL field, especially for critical areas. </sec> <sec> <title>CLINICALTRIAL</title> OSF Registries 10.17605/OSF.IO/Y85WA; https://osf.io/y85wa </sec>
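The abstract describes federated learning as decentralized models that share only parameters with a global model. As a minimal sketch of that idea (not the method of the reviewed paper), the following illustrates federated averaging (FedAvg) on a toy linear-regression task; all function names, the learning rate, and the synthetic data are illustrative assumptions:

```python
import numpy as np

def local_update(weights, data, targets, lr=0.1, epochs=5):
    # One client's local training: plain gradient descent on a linear model.
    # Raw data never leaves the client; only the updated weights do.
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - targets) / len(targets)
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets, lr=0.1):
    # Server round: each client trains locally, then the server averages
    # the returned weights, weighted by client dataset size (FedAvg).
    updates, sizes = [], []
    for X, y in client_datasets:
        updates.append(local_update(global_w, X, y, lr))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

# Toy run: two clients hold disjoint data generated from the same
# ground-truth linear model w* = [2, -1].
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):          # 20 communication rounds
    w = federated_average(w, clients)
```

Because only weight vectors travel between clients and server, the scheme preserves the data-privacy property the abstract cites as a key driver of FL adoption; this corresponds to horizontal FL, where clients share the same feature space but hold different samples.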
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,401 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,885 citations
Deep Learning with Differential Privacy
2016 · 5,610 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,593 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 citations