This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
From Black Box to Transparency: the hidden costs of XAI in NGN
Citations: 0
Authors: 6
Year: 2023
Abstract
As the 5G era progresses and the research community shifts its focus to the future 6G era, an unprecedented surge in the adoption of Artificial Intelligence (AI) techniques for network development and operation is expected. AI is envisioned to play a crucial role in 6G networks, enabling intelligent network management, enhanced user experience, higher security, and unprecedented levels of connectivity. However, the opaque nature of Machine Learning (ML) models has prompted a shift towards Explainable AI (XAI) techniques to enhance decision-making transparency and auditability. Despite the promises of XAI, computational costs remain a critical consideration. This study investigates the temporal and energy costs associated with four prominent XAI techniques: SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Permutation Importance (PI), and Morris Sensitivity (MS). These techniques are applied to four ML models in two distinct 5G network scenarios. Our results show that MS emerged as the most time-efficient and energy-conserving XAI method, demonstrating consistent feature relevance across various ML models and datasets, affirming its efficacy in explaining model decisions.
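The cost comparison described in the abstract can be illustrated with a minimal, self-contained sketch (an assumption for illustration, not the paper's actual code or datasets): Permutation Importance re-evaluates the model over the full dataset for every shuffled feature, while Morris Sensitivity only needs a handful of one-at-a-time perturbations, which is why MS tends to be cheaper.

```python
# Hypothetical toy comparison of Permutation Importance (PI) vs. Morris
# Sensitivity (MS); the model, data, and parameters are illustrative only.
import random
import time

random.seed(0)

def model(x):
    # Toy predictor: feature 0 matters most, feature 2 not at all.
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def permutation_importance(n_repeats=10):
    # Importance of feature j = average MSE increase after shuffling column j.
    base = mse([model(x) for x in X], y)
    imps = []
    for j in range(3):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            random.shuffle(col)
            Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
            deltas.append(mse([model(x) for x in Xp], y) - base)
        imps.append(sum(deltas) / n_repeats)
    return imps

def morris_sensitivity(n_traj=10, delta=0.1):
    # Mean absolute elementary effect per feature (one-at-a-time perturbation).
    effects = [[] for _ in range(3)]
    for _ in range(n_traj):
        x = [random.random() for _ in range(3)]
        f0 = model(x)
        for j in range(3):
            xp = list(x)
            xp[j] += delta
            effects[j].append(abs((model(xp) - f0) / delta))
    return [sum(e) / len(e) for e in effects]

t0 = time.perf_counter(); pi = permutation_importance(); t_pi = time.perf_counter() - t0
t0 = time.perf_counter(); ms = morris_sensitivity(); t_ms = time.perf_counter() - t0

# Both methods rank feature 0 highest, but PI makes ~6,200 model calls here
# versus ~40 for MS, mirroring the time/energy gap reported in the study.
print("PI:", pi, "MS:", ms)
```

Both explainers agree on the feature ranking for this toy model; the difference lies entirely in the number of model evaluations, which is the cost dimension the study measures.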
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations