This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Privacy-Preserving NILM: A Self-Alignment Source-Aware Domain Adaptation Approach
Citations: 23
Authors: 3
Year: 2025
Abstract
Nonintrusive load monitoring (NILM) identifies individual appliance power usage within an overall power load, enabling more refined and secure load management. However, existing deep learning-based NILM models require large amounts of labeled data from diverse devices, which is time-consuming to collect and raises privacy concerns. In addition, handling these large datasets demands significant computational and memory resources. To address these issues, we propose a self-alignment, source-aware domain adaptation approach. Our method employs domain adversarial networks to address feature and label distribution shifts between source and target domains, and it preserves privacy by fine-tuning the model without source domain data. To stabilize adversarial training, we incorporate a self-alignment mechanism (SAM) that updates parameters without accessing source domain data, enabling stable training while preserving privacy. Confidence-based label density maps (LDMs) generate pseudo-labels for fine-tuning. We validated our approach with intradomain and interdomain adaptability studies on synthetic and real data. Results show our method achieves decomposition accuracy superior to source-based methods for devices with regular usage patterns, while effectively preserving privacy by eliminating the need for source data during fine-tuning. This offers the potential to improve NILM efficiency and energy management in industrial measurement settings with similar stability requirements.
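The abstract gives no implementation details, but two of the mechanisms it names can be illustrated with a short sketch: a gradient reversal layer, the standard building block of domain adversarial training, and confidence-thresholded pseudo-labeling, one plausible reading of the confidence-based pseudo-label step. This is a minimal sketch under those assumptions; the class and function names and the 0.9 threshold are illustrative, not the paper's code.

```python
# Minimal sketch of two components named in the abstract (assumed forms,
# not the authors' implementation): gradient reversal for domain
# adversarial training, and confidence-based pseudo-labeling.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the feature extractor so it
        # learns domain-invariant features against the domain classifier.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


def pseudo_labels(logits, threshold=0.9):
    """Keep only target-domain predictions above a confidence cutoff.

    The 0.9 threshold is a hypothetical choice for illustration.
    """
    probs = torch.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return labels[mask], mask
```

In such a setup, the target-domain features would pass through `grad_reverse` before a domain classifier, while `pseudo_labels` would filter model predictions to supervise source-free fine-tuning.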
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,441 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,947 citations
Deep Learning with Differential Privacy
2016 · 5,708 citations
Federated Machine Learning
2019 · 5,679 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,604 citations