Exploring Retraining-free Speech Recognition for Intra-sentential Code-switching
In collaboration with Georgia Institute of Technology
Authors: Zhen Huang, Xiaodan Zhuang, Daben Liu, Xiaoqiang Xiao, Yuchen Zhang, Sabato Marco Siniscalchi
Code-switching refers to the phenomenon of changing languages within a sentence or discourse, and it poses a challenge for conventional automatic speech recognition systems deployed to handle a single target language. The code-switching problem is further complicated by the scarcity of the multilingual training data needed to build new, ad hoc multilingual acoustic and language models. In this work, we present a prototype research code-switching speech recognition system, currently not in production, that leverages existing monolingual acoustic and language models, i.e., no ad hoc training is needed. To generate high-quality pronunciations of foreign-language words in the native-language phoneme set, we combine existing acoustic phone decoders with an LSTM-based grapheme-to-phoneme model. In addition, we develop a code-switching language model that uses translated word pairs to borrow statistics from the native-language model. We demonstrate that our approach handles accented foreign pronunciations better than techniques based on human labeling. On an intra-sentential code-switching task, our best system reduces the WER from 34.4%, obtained with a conventional monolingual speech recognition system, to 15.3%, without harming monolingual accuracy.
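To illustrate the "borrowing statistics" idea at a high level, the sketch below shows one simple way a foreign word could inherit the probability mass of its native-language translation in a unigram table, down-weighted by a code-switch penalty. This is a minimal toy sketch for intuition only; the function name, the penalty parameter, and the toy vocabularies are all hypothetical and not the actual mechanism used in the paper's system.

```python
# Hypothetical sketch of borrowing language-model statistics via translated
# word pairs: each foreign word inherits the unigram probability of its
# native translation, scaled by a code-switch penalty, and the resulting
# table is renormalized. All names and numbers are illustrative.

def borrow_statistics(native_unigrams, word_pairs, switch_penalty=0.1):
    """native_unigrams: dict mapping native word -> probability (sums to 1.0).
    word_pairs: dict mapping foreign word -> its native translation.
    Returns a merged, renormalized unigram table over both vocabularies."""
    merged = dict(native_unigrams)
    for foreign, native in word_pairs.items():
        # The foreign word borrows its translation's probability mass,
        # down-weighted because code-switched words are comparatively rare.
        merged[foreign] = native_unigrams.get(native, 0.0) * switch_penalty
    total = sum(merged.values())
    return {word: prob / total for word, prob in merged.items()}

# Toy example: a three-word "native" LM plus two translated word pairs.
native = {"telefon": 0.40, "haus": 0.35, "auto": 0.25}
pairs = {"phone": "telefon", "house": "haus"}
cs_lm = borrow_statistics(native, pairs)
```

After merging, the foreign words appear in the model with probabilities proportional to their translations' native-language frequencies, so relative word statistics carry over without any multilingual training data.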