Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice) and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English). On the downside, adding more languages reduces model capacity per language, which is usually countered by increasing the overall model size, making training harder and inference slower. In this work, we introduce Language-Specific Transformer Layers (LSLs), which allow us to increase model capacity while keeping the amount of computation and the number of parameters used in the forward pass constant. The key idea is to have some layers of the encoder be source or target language-specific, while keeping the remaining layers shared. We study the best way to place these layers using a neural architecture search inspired approach, and achieve an improvement of 1.3 chrF (1.5 spBLEU) points over not using LSLs on a separate-decoder architecture, and 1.9 chrF (2.2 spBLEU) on a shared-decoder one.
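To make the routing idea concrete, here is a minimal PyTorch sketch of a language-specific layer and an encoder that interleaves it with shared layers. The module names, layer sizes, and placement pattern are illustrative assumptions, not the configuration found by the paper's architecture search; the point is that only the sub-layer matching the source (or target) language runs, so forward-pass compute and active parameters match a single shared layer.

```python
import torch
import torch.nn as nn


class LanguageSpecificLayer(nn.Module):
    """Sketch of an LSL: one standard encoder layer per language, with each
    batch routed to exactly one of them by a language index."""

    def __init__(self, num_languages: int, d_model: int = 512, nhead: int = 8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_languages)
        )

    def forward(self, x: torch.Tensor, lang_id: int) -> torch.Tensor:
        # Only the selected language's layer is executed in this forward pass.
        return self.layers[lang_id](x)


class MixedEncoder(nn.Module):
    """Encoder mixing shared layers with source- and target-indexed LSLs.
    The placement below is a hypothetical example."""

    def __init__(self, num_languages: int, d_model: int = 512, nhead: int = 8):
        super().__init__()
        self.shared_1 = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.src_lsl = LanguageSpecificLayer(num_languages, d_model, nhead)
        self.shared_2 = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.tgt_lsl = LanguageSpecificLayer(num_languages, d_model, nhead)

    def forward(self, x: torch.Tensor, src_lang: int, tgt_lang: int) -> torch.Tensor:
        x = self.shared_1(x)
        x = self.src_lsl(x, src_lang)   # indexed by the source language
        x = self.shared_2(x)
        x = self.tgt_lsl(x, tgt_lang)   # indexed by the target language
        return x
```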

Related readings and updates.

Efficient Inference For Neural Machine Translation

Large transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field. In this work, we look for the optimal combination of known techniques to optimize inference speed without sacrificing translation quality. We conduct an empirical study that stacks various approaches and demonstrates that a combination of replacing decoder self-attention with simplified recurrent units, adopting a…
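As a rough illustration of the self-attention replacement mentioned above, the sketch below swaps the masked self-attention block of a decoder layer for a lightweight recurrent unit (a GRU stands in here for the paper's simplified recurrent unit), while keeping cross-attention and the feed-forward block unchanged. This is an assumption-laden sketch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class RecurrentDecoderLayer(nn.Module):
    """Hypothetical decoder layer: recurrence instead of masked self-attention,
    standard cross-attention over the encoder states, standard feed-forward."""

    def __init__(self, d_model: int = 512, nhead: int = 8, d_ff: int = 2048):
        super().__init__()
        self.recurrence = nn.GRU(d_model, d_model, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tgt: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # Left-to-right recurrence replaces quadratic masked self-attention.
        rec_out, _ = self.recurrence(tgt)
        x = self.norm1(tgt + rec_out)
        # Cross-attention over the encoder output is kept as usual.
        attn_out, _ = self.cross_attn(x, memory, memory, need_weights=False)
        x = self.norm2(x + attn_out)
        return self.norm3(x + self.ff(x))
```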
See paper details

ACL 2020

Apple sponsored the 58th Annual Meeting of the Association for Computational Linguistics (ACL), held from July 5 to 10. ACL is the premier conference in the field of computational linguistics, covering a broad spectrum of research areas concerning computational approaches to natural language.

See event details