OpenAlex · Updated hourly · Last updated: 25.04.2026, 02:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Defending Against Membership Inference Attacks on Iteratively Pruned Deep Neural Networks

2025 · 2 citations · Open Access
Open full text at the publisher

Citations: 2
Authors: 7
Year: 2025

Abstract

Model pruning is a technique for compressing deep learning models, and pruning a model iteratively achieves better compression with lower utility loss. However, our analysis reveals that iterative pruning significantly increases model memorization, making the pruned models more vulnerable to membership inference attacks (MIAs). Unfortunately, the vast majority of existing defenses against MIAs are designed for original, unpruned models. In this paper, we propose WEMEM, a new framework that weakens memorization during the iterative pruning process. Specifically, our analysis identifies two important factors that increase memorization in iterative pruning, namely data reuse and inherent memorability. We consider the individual and combined impacts of both factors, yielding three scenarios that lead to increased memorization in iteratively pruned models. We design three defense primitives based on these factors' characteristics, and by combining these primitives, we propose methods tailored to each scenario to weaken memorization effectively. Comprehensive experiments under ten adaptive MIAs demonstrate the effectiveness of the proposed defenses. Moreover, our defenses outperform five existing defenses in terms of privacy-utility tradeoff and efficiency. Additionally, we enhance the proposed defenses to adjust their settings automatically for optimal defense, improving their practicability.
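The iterative pruning the abstract refers to can be illustrated with a minimal magnitude-pruning sketch. This is a generic illustration, not the paper's WEMEM method: the `retrain` callback is a stub standing in for the fine-tuning step, and the repeated use of the same training data in every round corresponds to the "data reuse" factor the abstract identifies.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of the still-nonzero weights."""
    magnitudes = np.abs(weights).ravel()
    nonzero = magnitudes[magnitudes > 0]
    if nonzero.size == 0:
        return weights
    threshold = np.quantile(nonzero, fraction)
    mask = np.abs(weights) >= threshold
    return weights * mask

def iterative_prune(weights, rounds=3, fraction=0.5, retrain=lambda w: w):
    """Alternate pruning and retraining. In a real pipeline each round
    fine-tunes on the same training set, which is the repeated data
    exposure ('data reuse') linked to increased memorization."""
    for _ in range(rounds):
        weights = magnitude_prune(weights, fraction)
        weights = retrain(weights)  # stub: real code would fine-tune here
    return weights

# Each round removes half of the remaining weights, so sparsity compounds.
w = np.random.randn(4, 4)
pruned = iterative_prune(w, rounds=2, fraction=0.5)
sparsity = float(np.mean(pruned == 0))
```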

Related works

Authors

Topics

Adversarial Robustness in Machine Learning · Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education