Multiple References with Meaningful Variations Improve Literary Machine Translation
While a source sentence can be translated in many ways, most machine translation (MT) models are trained with only a single reference. Previous work has shown that using synthetic paraphrases can improve MT. This paper investigates best practices for employing multiple references by analyzing the semantic similarity among different English translations of world literature in the Par3 dataset. We classify the semantic similarity between paraphrases into three levels (low, medium, and high) and fine-tune three different models (mT5-large, LLaMA-2-7B, and Opus-MT) on literary MT tasks. Across models, holding the total number of training instances constant, training with a single reference per source text only marginally outperforms training with multiple references over half as many source texts. Moreover, when fine-tuning an LLM, using paraphrases with medium and high semantic similarity outperforms training on an unfiltered dataset, with improvements in BLEU (0.3-0.5), COMET (0.1-0.9), and chrF++ (0.17-0.32). Our code is publicly available on GitHub.
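To make the similarity-bucketing step concrete, here is a minimal sketch of classifying a pair of reference translations into low/medium/high semantic similarity. It assumes a generic sentence-embedding model; the model name and the thresholds (0.7, 0.85) are illustrative placeholders, not the paper's actual method or cutoffs.

```python
# Sketch: bucket two reference translations of the same source sentence
# by the cosine similarity of their sentence embeddings.
# NOTE: the embedding model and thresholds below are assumptions for
# illustration; the paper may use a different similarity measure.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def similarity_level(ref_a: str, ref_b: str,
                     low_cut: float = 0.7, high_cut: float = 0.85) -> str:
    """Classify a paraphrase pair as 'low', 'medium', or 'high' similarity."""
    emb = model.encode([ref_a, ref_b], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

# Example: filter a multi-reference corpus to medium/high-similarity pairs
# before fine-tuning, mirroring the paper's filtering experiment.
pair = ("He walked slowly into the night.",
        "Slowly, he stepped out into the darkness.")
print(similarity_level(*pair))
```

Under this setup, a multi-reference training set can be filtered by discarding pairs labeled "low" before fine-tuning.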