OpenAlex · Updated hourly · Last updated: Mar 18, 2026, 01:05

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Interactive Analysis of CNN Robustness

2021 · 0 citations · Computer Graphics Forum · Open Access
Open full text at the publisher

Citations: 0
Authors: 5
Year: 2021

Abstract

While convolutional neural networks (CNNs) have found wide adoption as state‐of‐the‐art models for image‐related tasks, their predictions are often highly sensitive to small input perturbations to which human vision is robust. This paper presents Perturber, a web‐based application that allows users to instantaneously explore how CNN activations and predictions evolve when a 3D input scene is interactively perturbed. Perturber offers a large variety of scene modifications, such as camera controls, lighting and shading effects, background modifications, object morphing, as well as adversarial attacks, to facilitate the discovery of potential vulnerabilities. Fine‐tuned model versions can be directly compared for qualitative evaluation of their robustness. Case studies with machine learning experts have shown that Perturber helps users quickly generate hypotheses about model vulnerabilities and qualitatively compare model behavior. Using quantitative analyses, we could replicate users' insights with other CNN architectures and input images, yielding new insights about the vulnerability of adversarially trained models.
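The sensitivity the abstract describes can be illustrated with the fast gradient sign method (FGSM), one of the adversarial attacks the paper mentions. The sketch below is not the Perturber tool itself; it applies FGSM to a toy logistic-regression "model" with made-up weights and inputs, showing how a small, targeted perturbation flips a prediction:

```python
# Minimal FGSM sketch on a toy logistic-regression "model".
# All weights and inputs here are hypothetical, chosen for illustration;
# real attacks backpropagate through a full CNN instead.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y_true, eps):
    """Perturb each coordinate of x by eps in the direction that
    increases the logistic loss. For logistic loss,
    dL/dx_i = (p - y_true) * w_i, so we step by eps * sign of that."""
    p = predict(w, b, x)
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

# Toy model and a confidently, correctly classified class-1 input.
w, b = [8.0, -4.0, 2.0], 0.0
x = [0.3, 0.1, 0.2]
p_clean = predict(w, b, x)   # well above 0.5: predicted class 1

# A small FGSM step (max per-coordinate change 0.2) flips the prediction.
x_adv = fgsm(w, b, x, y_true=1.0, eps=0.2)
p_adv = predict(w, b, x_adv)
```

Here each input coordinate moves by at most 0.2, yet the predicted class changes; CNNs exhibit the same failure mode at far smaller, often imperceptible, perturbation sizes.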

Topics

- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Artificial Intelligence in Healthcare and Education