Bidirectional Recurrent Neural Network Language Models for Automatic Speech Recognition

dc.contributor.author Chen, Stanley
dc.contributor.author Sethy, Abhinav
dc.contributor.author Ramabhadran, Bhuvana
dc.contributor.author Arısoy, Ebru
dc.date.accessioned 2019-02-28T13:04:26Z
dc.date.available 2019-02-28T13:04:26Z
dc.date.issued 2015
dc.department Faculty of Engineering, Department of Electrical and Electronics Engineering en_US
dc.description ##nofulltext## en_US
dc.description Ebru Arısoy (MEF Author) en_US
dc.description.WoSDocumentType Proceedings Paper
dc.description.WoSIndexDate 2015 en_US
dc.description.WoSPublishedMonth April en_US
dc.description.WoSYOKperiod YÖK - 2014-15 en_US
dc.description.abstract Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts. en_US
dc.description.woscitationindex Conference Proceedings Citation Index - Science en_US
dc.identifier.citation Arisoy, E., Sethy, A., Ramabhadran, B., & Chen, S. (2015, April 19-24). Bidirectional recurrent neural network language models for automatic speech recognition. 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, 5421-5425. en_US
dc.identifier.endpage 5425 en_US
dc.identifier.issn 1520-6149
dc.identifier.scopusquality N/A
dc.identifier.startpage 5421 en_US
dc.identifier.uri https://hdl.handle.net/20.500.11779/705
dc.identifier.wos WOS:000427402905108
dc.identifier.wosquality N/A
dc.institutionauthor Arısoy, Ebru
dc.language.iso en en_US
dc.relation.ispartof 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, April 19-24, 2015 en_US
dc.relation.publicationcategory Conference Item - International - Institutional Faculty Member en_US
dc.rights info:eu-repo/semantics/closedAccess en_US
dc.subject Long short-term memory en_US
dc.subject Bidirectional neural networks en_US
dc.subject Language modeling en_US
dc.subject Recurrent neural networks en_US
dc.title Bidirectional Recurrent Neural Network Language Models for Automatic Speech Recognition en_US
dc.type Conference Object en_US
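
Note: the snippet below is a minimal PyTorch sketch of the kind of bidirectional LSTM language model described in the abstract above. It is an illustration only, not the authors' implementation (the full text is not available in this record), and all class names, layer sizes, and hyperparameters are hypothetical. The forward direction conditions on words to the left of a position and the backward direction on words to the right; each direction is shifted by one step so the target word never appears in its own context, one of the modeling issues the abstract alludes to.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLSTMLanguageModel(nn.Module):
    """Illustrative bidirectional LSTM language model (hypothetical sketch)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True runs a second LSTM over the reversed word sequence
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        h, _ = self.lstm(self.embed(token_ids))          # (batch, seq, 2*H)
        h_fwd = h[:, :, :self.hidden_dim]                # left-to-right states
        h_bwd = h[:, :, self.hidden_dim:]                # right-to-left states
        # To score word t, use the forward state at t-1 (past words only)
        # and the backward state at t+1 (future words only); zero vectors
        # stand in for the sentence boundaries.
        h_fwd = F.pad(h_fwd, (0, 0, 1, 0))[:, :-1]
        h_bwd = F.pad(h_bwd, (0, 0, 0, 1))[:, 1:]
        return self.proj(torch.cat([h_fwd, h_bwd], dim=-1))


# Toy usage: score two 7-token sequences from a 1000-word vocabulary.
model = BiLSTMLanguageModel(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 7))
logits = model(tokens)
print(logits.shape)  # torch.Size([2, 7, 1000])
```

A unidirectional model corresponds to keeping only the forward states; the paper's comparison on Broadcast News contrasts these two configurations for both plain RNNs and LSTMs.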

Files

License bundle

Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission