Bidirectional Recurrent Neural Network Language Models for Automatic Speech Recognition
Date
2015
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Abstract
Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.
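For illustration only, the sketch below shows one way a bidirectional LSTM language model of the kind described in the abstract could be set up in PyTorch. It is not the authors' system; the vocabulary size, layer widths, and the shift-and-pad scheme for excluding the target word are assumptions made for the example. The point it demonstrates is that, unlike a unidirectional model, the score for word t is computed from forward context up to word t-1 and backward context from word t+1 onward.

```python
# A minimal sketch (not the authors' implementation) of a bidirectional LSTM
# language model in PyTorch; vocabulary size, layer widths, and the
# shift-and-pad scheme below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True runs a forward and a backward LSTM over the input,
        # so every position has access to both past and future words.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.output = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer word indices
        states, _ = self.lstm(self.embedding(token_ids))
        fwd, bwd = states.chunk(2, dim=-1)  # forward / backward hidden states
        # To predict word t without letting the model see it, combine the
        # forward state at t-1 with the backward state at t+1, zero-padding
        # the sentence boundaries.
        fwd = F.pad(fwd[:, :-1, :], (0, 0, 1, 0))
        bwd = F.pad(bwd[:, 1:, :], (0, 0, 0, 1))
        return self.output(torch.cat([fwd, bwd], dim=-1))  # per-position word scores


if __name__ == "__main__":
    model = BiLSTMLanguageModel(vocab_size=10000)
    batch = torch.randint(0, 10000, (4, 20))   # 4 dummy sentences of 20 tokens
    print(model(batch).shape)                  # torch.Size([4, 20, 10000])
```

Because each prediction depends on the words that follow it, scoring requires a complete hypothesis rather than a left-to-right prefix, which is presumably among the practical issues for speech recognition that the abstract says the paper discusses.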
Description
Ebru Arısoy (MEF Author)
Keywords
Long short-term memory, Bidirectional neural networks, Language modeling, Recurrent neural networks
Turkish CoHE Thesis Center URL
Citation
Arısoy, E., Sethy, A., Ramabhadran, B., & Chen, S. (2015, April 19-24). Bidirectional recurrent neural network language models for automatic speech recognition. In Proceedings of the 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 5421-5425). Brisbane, Australia.
WoS Q
N/A
Scopus Q
N/A
Source
40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, April 19-24, 2015
Volume
Issue
Start Page
5421
End Page
5425