Refining Low-Resource Unsupervised Translation by Language Disentanglement of Multilingual Model

Published in 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, USA, 2022

Recommended citation: Xuan-Phi Nguyen, Shafiq Joty, Wu Kui & Aw Ai Ti (2022). Refining Low-Resource Unsupervised Translation by Language Disentanglement of Multilingual Model. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Paper Link: https://arxiv.org/abs/2205.15544

Abstract

Numerous recent works on unsupervised machine translation (UMT) imply that competent unsupervised translation of low-resource and unrelated languages, such as Nepali or Sinhala, is only possible if the model is trained in a massive multilingual environment, where these low-resource languages are mixed with high-resource counterparts. Nonetheless, while the high-resource languages greatly help kick-start the target low-resource translation tasks, the language discrepancy between them may hinder their further improvement. In this work, we propose a simple refinement procedure to separate languages from a pre-trained multilingual UMT model so that it can focus on only the target low-resource task. Our method achieves the state of the art in the fully unsupervised translation tasks of English to Nepali, Sinhala, Gujarati, Latvian, Estonian and Kazakh, with BLEU score gains of 3.5, 3.5, 3.3, 4.1, 4.2, and 3.3, respectively. Our codebase is available at github.com/nxphi47/refine_unsup_multilingual_mt.
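To make the high-level idea concrete, below is a minimal sketch (not the paper's actual training code; see the linked repository for that) of the general recipe the abstract describes: take a multilingual checkpoint and continue training it on only the target low-resource pair, here via a standard online back-translation step. The model name `facebook/mbart-large-50-many-to-many-mmt`, the language codes, the learning rate, and the toy corpus are all assumptions chosen for illustration.

```python
# Illustrative sketch: refine a multilingual seq2seq checkpoint on a single
# English<->Nepali pair via online back-translation. This is a simplified
# stand-in for the paper's refinement procedure, not a reproduction of it.
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"  # assumed stand-in checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Monolingual text in the target low-resource language (placeholder data).
nepali_monolingual = ["..."]  # replace with real Nepali sentences

model.train()
for sentence in nepali_monolingual:
    # Step 1: back-translate the Nepali sentence into English with the
    # current model to create a synthetic parallel pair.
    tokenizer.src_lang = "ne_NP"
    src = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        synthetic_en = model.generate(
            **src, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]
        )
    en_text = tokenizer.batch_decode(synthetic_en, skip_special_tokens=True)[0]

    # Step 2: train on the synthetic pair (English -> original Nepali), so
    # gradient updates concentrate on the single pair of interest rather
    # than on all languages in the multilingual mix.
    tokenizer.src_lang = "en_XX"
    tokenizer.tgt_lang = "ne_NP"
    batch = tokenizer(en_text, text_target=sentence, return_tensors="pt")
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The key design point the abstract argues for is the restriction in Step 2: once the multilingual model has kick-started the low-resource pair, further updates are focused on that pair alone, avoiding interference from unrelated high-resource languages.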