This is an overview page with metadata for this research paper. The full article is available from the publisher.
Delving into adversarial attacks on deep policies
Citations: 24
Authors: 2
Year: 2017
Abstract
Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly from raw inputs such as image pixels. In this paper we present a novel study of adversarial attacks on deep reinforcement learning policies. We compare the effectiveness of attacks using adversarial examples versus random noise. We present a novel method, based on the value function, for reducing the number of times adversarial examples need to be injected for a successful attack. We further explore how re-training on random noise and FGSM perturbations affects resilience against adversarial examples.
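The FGSM perturbations mentioned in the abstract follow the standard Fast Gradient Sign Method: the input is nudged by epsilon in the sign direction of the loss gradient. As a minimal sketch, the toy logistic model, weights, and epsilon below are illustrative assumptions, not the paper's actual setup (which attacks deep policies on image pixels):

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """Fast Gradient Sign Method on a toy logistic model.

    Loss: binary cross-entropy of sigmoid(w @ x) against label y.
    The gradient of that loss w.r.t. the input x is (sigmoid(w @ x) - y) * w,
    so the adversarial input is x + eps * sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x)))   # model's predicted probability
    grad_x = (p - y) * w                 # analytic input gradient of the loss
    return x + eps * np.sign(grad_x)

def bce_loss(x, w, y):
    """Binary cross-entropy of the same toy model, for comparison."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

rng = np.random.default_rng(0)
w = rng.normal(size=8)       # fixed (assumed) model weights
x = rng.normal(size=8)       # clean input
y = 1.0                      # true label

x_adv = fgsm_perturb(x, w, y, eps=0.1)
# The perturbed input incurs at least as much loss as the clean one.
assert bce_loss(x_adv, w, y) >= bce_loss(x, w, y)
```

Because the logistic loss is convex in the input, the signed-gradient step is guaranteed not to decrease the loss here; for deep policies the same step only ascends a linear approximation of the loss.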
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,316 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,385 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,292 citations
CBAM: Convolutional Block Attention Module
2018 · 21,257 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,488 citations