Compositional Neural Network Language Models for Agglutinative Languages
Date
2016
Authors
Arısoy, Ebru
Saraçlar, Murat
Journal Title
Journal ISSN
Volume Title
Publisher
Abstract
Continuous space language models (CSLMs) have proven successful in speech recognition. With proper training of the word embeddings, words that are semantically or syntactically related are expected to be mapped to nearby locations in the continuous space. In agglutinative languages, words are formed by concatenating stems and suffixes, so compositional modeling is important. However, when trained on word tokens, CSLMs do not explicitly consider this structure. In this paper, we explore compositional modeling of stems and suffixes in a long short-term memory (LSTM) neural network language model. Our proposed models jointly learn distributed representations for stems and endings (concatenations of suffixes) and predict the probability of stem and ending sequences. Experiments on a Turkish broadcast news transcription task show that the proposed models provide further gains on top of a state-of-the-art stem-ending-based n-gram language model.
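The abstract describes an LSTM language model that operates on an interleaved sequence of stem and ending units, with both unit types embedded in the same continuous space. The following is a minimal sketch of that idea in PyTorch; it is not the authors' implementation, and names such as StemEndingLSTMLM, vocab_size, and the example segmentation are illustrative assumptions.

# Hypothetical sketch (not the authors' code): an LSTM language model over a
# mixed vocabulary of stem and ending units, as described in the abstract.
import torch
import torch.nn as nn

class StemEndingLSTMLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # A single embedding table holds both stem and ending units, so their
        # distributed representations are learned jointly.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) indices of interleaved stem/ending units
        x = self.embed(tokens)
        h, state = self.lstm(x, state)
        logits = self.out(h)  # scores for the next stem or ending unit
        return logits, state

# Usage: a Turkish word such as "kitaplarımda" would be segmented into a stem
# ("kitap") and an ending ("+larımda"), each mapped to its own index before
# being fed to the model.
model = StemEndingLSTMLM(vocab_size=10000)
dummy = torch.randint(0, 10000, (2, 8))  # fake batch of unit indices
logits, _ = model(dummy)
print(logits.shape)  # torch.Size([2, 8, 10000])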
Description
Ebru Arısoy (MEF Author)
ORCID
Keywords
Agglutinative languages, Sub-word-based language modeling, Long short-term memory, Language modeling
Turkish CoHE Thesis Center URL
Citation
Arısoy, E., & Saraçlar, M. (2016). Compositional neural network language models for agglutinative languages. Proceedings of INTERSPEECH 2016, San Francisco, CA, pp. 3494-3498.
WoS Q
N/A
Scopus Q
N/A
Source
Conference: 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016); Location: San Francisco, CA; Date: September 8-12, 2016
Volume
Issue
Start Page
3494
End Page
3498