
Large transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field. In this work, we look for the optimal combination of known techniques to optimize inference speed without sacrificing translation quality. We conduct an empirical study that stacks various approaches and demonstrates that combining three techniques, namely replacing decoder self-attention with simplified recurrent units, adopting a deep encoder with a shallow decoder, and pruning multi-head attention, can achieve up to 109 percent speedup on CPU and 84 percent on GPU and reduce the number of parameters by 25 percent, while maintaining the same translation quality in terms of BLEU.
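As a rough illustration of the decoder-side changes, the PyTorch sketch below shows a decoder layer whose self-attention is replaced by a simplified recurrent unit, paired with a deep encoder and a single decoder layer. This is not the paper's implementation: the single-forget-gate formulation, the ReLU output, the layer counts, and the model dimensions are illustrative assumptions, and attention-head pruning is not shown.

```python
import torch
import torch.nn as nn


class SimplifiedRecurrentUnit(nn.Module):
    """Illustrative simplified recurrent unit: one forget gate blends the
    previous state with a linear projection of the current input."""

    def __init__(self, d_model: int):
        super().__init__()
        self.forget = nn.Linear(d_model, d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tgt_len, d_model); recur over target positions.
        batch, _, d_model = x.shape
        state = x.new_zeros(batch, d_model)
        outputs = []
        for t in range(x.size(1)):
            f = torch.sigmoid(self.forget(x[:, t]))
            state = f * state + (1.0 - f) * self.proj(x[:, t])
            outputs.append(torch.relu(state))
        return torch.stack(outputs, dim=1)


class RecurrentDecoderLayer(nn.Module):
    """Decoder layer with self-attention swapped for the recurrent unit;
    encoder-decoder attention and the feed-forward block stay standard."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.recurrent = SimplifiedRecurrentUnit(d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tgt: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        tgt = self.norm1(tgt + self.recurrent(tgt))
        attn_out, _ = self.cross_attn(tgt, memory, memory)
        tgt = self.norm2(tgt + attn_out)
        return self.norm3(tgt + self.ffn(tgt))


# Deep encoder, shallow decoder (layer counts chosen for illustration only).
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=12)
decoder = RecurrentDecoderLayer(d_model=512, n_heads=8, d_ff=2048)

src = torch.randn(2, 7, 512)  # toy source embeddings: (batch, src_len, d_model)
tgt = torch.randn(2, 5, 512)  # toy target embeddings: (batch, tgt_len, d_model)
memory = encoder(src)
out = decoder(tgt, memory)    # (2, 5, 512)
```

The intent of this shape of model is that the autoregressive decoder, which runs once per generated token, becomes cheap (recurrent state instead of growing self-attention), while the encoder, which runs only once per sentence, keeps most of the capacity.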

Related readings and updates.

Improving How Machine Translations Handle Grammatical Gender Ambiguity

Machine Translation (MT) enables people to connect with others and engage with content across language barriers. Grammatical gender presents a difficult challenge for these systems, as some languages require specificity for terms that can be ambiguous or neutral in other languages. For example, when translating the English word "nurse" into Spanish, one must decide whether the feminine "enfermera" or the masculine "enfermero" is appropriate…

ACL 2020

Apple sponsored the 58th Annual Meeting of the Association for Computational Linguistics (ACL) from July 5 to 10. ACL is the premier conference in the field of computational linguistics, covering a broad spectrum of research areas on computational approaches to natural language.
