OpenAlex · Updated hourly · Last updated: Mar 11, 2026, 08:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

2015 · 18,473 citations
Open full text at the publisher

Citations: 18,473 · Authors: 4 · Year: 2015

Abstract

Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.
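
For orientation, here is a minimal NumPy sketch of the two ideas the abstract names: the PReLU activation and the rectifier-aware weight initialization. The function names and toy layer sizes are illustrative assumptions, not from the paper; only the functional form of PReLU, the sqrt(2/n) standard deviation, and the initial slope of 0.25 follow the paper.

```python
import numpy as np

def prelu(y, a):
    # PReLU: f(y_i) = y_i if y_i > 0, else a_i * y_i,
    # where each slope a_i is learned jointly with the weights
    # (a = 0 recovers ReLU; a fixed and small recovers Leaky ReLU).
    return np.where(y > 0, y, a * y)

def he_init(fan_in, fan_out, rng):
    # Initialization derived for rectifier nonlinearities: zero-mean
    # Gaussian with std = sqrt(2 / fan_in), chosen so activation
    # variance is preserved through very deep rectified networks.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 256))    # toy batch of 4 inputs (sizes are illustrative)
W = he_init(256, 128, rng)
a = np.full(128, 0.25)           # the paper initializes each slope a_i to 0.25
h = prelu(x @ W, a)              # one fully connected layer followed by PReLU
print(h.shape)                   # (4, 128)
```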

Related works

Authors

Institutions

Topics

Advanced Neural Network Applications · Domain Adaptation and Few-Shot Learning · Adversarial Robustness in Machine Learning