This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
LSTM: A Search Space Odyssey
Citations: 6,725
Authors: 5
Year: 2016
Abstract
Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs (≈15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.
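The abstract's central finding concerns two components of the standard ("vanilla") LSTM cell: the forget gate and the output activation function. As a point of reference, below is a minimal sketch of one forward step of a standard LSTM cell in NumPy. Note that it omits the peephole connections that the paper's vanilla variant additionally uses, and that the function name `lstm_step` and the packed weight layout are illustrative choices, not taken from the paper:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a standard LSTM cell (peephole-free sketch).

    x: input (n_in,); h_prev, c_prev: previous hidden/cell state (n,)
    W: packed weights (4*n, n_in + n); b: packed biases (4*n,)
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b  # all four pre-activations at once
    i = sigmoid(z[0:n])        # input gate
    f = sigmoid(z[n:2*n])      # forget gate: found most critical in the study
    g = np.tanh(z[2*n:3*n])    # block input (candidate cell update)
    o = sigmoid(z[3*n:4*n])    # output gate
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # the outer tanh is the "output activation function"
    return h, c

# Example usage with random parameters (hypothetical sizes).
rng = np.random.default_rng(0)
n_in, n = 3, 4
W = rng.standard_normal((4 * n, n_in + n)) * 0.1
b = np.zeros(4 * n)
h, c = lstm_step(rng.standard_normal(n_in), np.zeros(n), np.zeros(n), W, b)
print(h.shape, c.shape)  # (4,) (4,)
```

Removing the forget gate (fixing f = 1) or the output activation (replacing the outer tanh with the identity) corresponds to two of the eight variants the paper evaluates, both of which it found to hurt performance.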
Related Works
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
2014 · 10,764 citations
Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups
2012 · 10,244 citations
Speech recognition with deep recurrent neural networks
2013 · 8,784 citations
Librispeech: An ASR corpus based on public domain audio books
2015 · 5,832 citations
Evaluating collaborative filtering recommender systems
2004 · 5,738 citations