OpenAlex · Updated hourly · Last updated: 12.04.2026, 05:29

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Evasion Attacks in Continual Learning

2025 · 0 citations · 2 authors

Open full text at the publisher

Abstract

Continual learning (CL) enables machine learning models to adapt to evolving tasks while addressing challenges such as catastrophic forgetting. However, it inherits vulnerabilities from conventional settings, notably evasion attacks, where adversarial perturbations degrade model performance. This study investigates the impact of evasion attacks, specifically the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), in a class-incremental continual learning scenario on the CIFAR-10 dataset. Results show that adversarial examples generated during training largely retain their effectiveness across CL steps, demonstrating transferability over time. Their success varies with the similarity between newly introduced and previously learned classes, sometimes increasing or decreasing accordingly. Adversarial training, adapted for the CL setting, is also evaluated. While it improves robustness against specific attacks (mean gain ~30%), it introduces trade-offs such as reduced accuracy on benign inputs and potential overfitting to adversarial examples. These findings highlight the challenge of balancing robustness, generalization, and efficiency, and emphasize the importance of understanding how adversarial examples transfer across tasks in continual learning.
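The two attacks named in the abstract, FGSM and PGD, are standard gradient-based evasion attacks. The sketch below illustrates their core update rules on a toy differentiable model with NumPy; the function and weights are hypothetical placeholders, not the paper's CNN-on-CIFAR-10 setup, and the code is an illustration of the general technique rather than the authors' implementation.

```python
import numpy as np

# Toy "model": logistic regression with fixed, hypothetical weights.
# The paper attacks CNNs on CIFAR-10; this stand-in only shows the update rules.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y):
    """Gradient of binary cross-entropy w.r.t. the input x (linear logit)."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps):
    """Single-step FGSM: x_adv = x + eps * sign(grad_x L(x, y))."""
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

def pgd(x, y, eps, alpha, steps):
    """PGD: iterated FGSM steps, projected back into the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_wrt_input(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L-infinity projection
    return x_adv

x = rng.normal(size=4)
y = 1.0
x_fgsm = fgsm(x, y, eps=0.1)
x_pgd = pgd(x, y, eps=0.1, alpha=0.03, steps=10)

# Both attacks ascend the loss, so the predicted probability of the
# true class (y = 1) drops relative to the clean input.
p_clean = sigmoid(w @ x + b)
p_fgsm = sigmoid(w @ x_fgsm + b)
p_pgd = sigmoid(w @ x_pgd + b)
```

Adversarial training, as evaluated in the paper, would generate such perturbed inputs during each CL step and include them in the training batches; the trade-offs reported (reduced benign accuracy, overfitting to the attack) stem from that mixing.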

Topics

Domain Adaptation and Few-Shot Learning · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education