This is an overview page with metadata for this scientific work. The full article is available from the publisher.
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing
Citations: 0
Authors: 4
Year: 2023
Abstract
Randomized smoothing is the state-of-the-art approach to constructing image classifiers that are provably robust against additive adversarial perturbations of bounded magnitude. However, it is more complicated to construct reasonable certificates against semantic transformations (e.g., image blurring, translation, gamma correction) and their compositions. In this work, we propose \emph{General Lipschitz (GL)}, a new framework to certify neural networks against composable resolvable semantic perturbations. Within the framework, we analyze the transformation-dependent Lipschitz continuity of smoothed classifiers w.r.t. transformation parameters and derive corresponding robustness certificates. Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
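To make the abstract concrete, the following is a minimal sketch of classic additive-noise randomized smoothing (the baseline the paper builds on, in the style of Cohen et al.), not the paper's transformation-dependent variant: the smoothed classifier takes a majority vote over Gaussian perturbations of the input, and the vote margin yields a certified L2 radius. The function names, the toy base classifier, and all parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def smooth_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + eps) = c], eps ~ N(0, sigma^2 I),
    together with a certified L2 radius sigma * Phi^{-1}(p_top).
    All names and defaults here are illustrative, not the paper's API."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    top = np.argmax(counts)
    p_top = counts[top] / n_samples
    # Clip away from 1.0 so the inverse Gaussian CDF stays finite;
    # a rigorous certificate would use a confidence lower bound on p_top.
    p_top = min(p_top, 1.0 - 1e-6)
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return classes[top], radius

# Toy base classifier: sign of the coordinate sum.
label, radius = smooth_predict(lambda x: int(x.sum() > 0), np.array([0.2]))
```

The GL framework replaces the additive noise above with smoothing over the parameters of a resolvable semantic transformation (e.g., blur strength or gamma), and the Lipschitz analysis is then carried out w.r.t. those parameters rather than the pixel values.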
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,488 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,648 citations
CBAM: Convolutional Block Attention Module
2018 · 21,547 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,380 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,587 citations