Exploiting multilingualism through multistage fine-tuning for low-resource neural machine translation

R Dabre, A Fujita, C Chu - Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019 - aclanthology.org
Abstract
This paper highlights the impressive utility of multi-parallel corpora for transfer learning in a one-to-many low-resource neural machine translation (NMT) setting. We report on a systematic comparison of multistage fine-tuning configurations, consisting of (1) pre-training on an external large (209k–440k) parallel corpus for English and a helping target language, (2) mixed pre-training or fine-tuning on a mixture of the external and low-resource (18k) target parallel corpora, and (3) pure fine-tuning on the target parallel corpora. Our experiments confirm that multi-parallel corpora are extremely useful despite their scarcity and content-wise redundancy, thus exhibiting the true power of multilingualism. Even when the helping target language is not one of the target languages of our concern, our multistage fine-tuning can give 3–9 BLEU score gains over a simple one-to-one model.
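To make the three-stage schedule described above concrete, here is a minimal Python sketch of the training sequence: pre-train on the large external corpus, then train on a mixture of the external and low-resource target corpora, then fine-tune on the target corpora alone. The corpus sizes, the train_stage helper, and all data below are hypothetical placeholders for illustration, not the authors' actual implementation.

import random

def train_stage(model_state, corpus, epochs, tag):
    """Stand-in for NMT training on a list of (source, target) sentence pairs."""
    for _ in range(epochs):
        random.shuffle(corpus)
        # ... optimizer updates over `corpus` would run here ...
    model_state["history"].append((tag, len(corpus)))
    return model_state

def multistage_finetune(external_corpus, target_corpora):
    """Stage 1: pre-train on the external corpus (English + helping target language).
    Stage 2: mixed fine-tuning on the external plus low-resource target corpora.
    Stage 3: pure fine-tuning on the target corpora only."""
    model = {"history": []}
    model = train_stage(model, list(external_corpus), epochs=10, tag="pretrain")
    mixed = list(external_corpus) + [p for c in target_corpora for p in c]
    model = train_stage(model, mixed, epochs=5, tag="mixed")
    pure = [p for c in target_corpora for p in c]
    model = train_stage(model, pure, epochs=5, tag="pure")
    return model

if __name__ == "__main__":
    # Toy data: one large external En->helping-language corpus and
    # two small En->target corpora (illustrative sizes only).
    external = [("hello", "bonjour")] * 1000
    targets = [[("hello", "hola")] * 18, [("hello", "ciao")] * 18]
    print(multistage_finetune(external, targets)["history"])

The point of the sketch is only the staging and data-mixing order; any actual NMT toolkit could supply the training step inside train_stage.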