R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning Learning

Abstract

Despite recent breakthroughs in reasoning-enhanced large language models (LLMs) such as DeepSeek-R1, incorporating inference-time reasoning into machine translation (MT), where human translators naturally employ structured, multi-layered chains of thought (CoTs), remains underexplored. Existing methods either design a fixed CoT tailored to a specific MT sub-task (e.g., literature translation) or rely on synthesizing CoTs that are not aligned with human reasoning, limiting their adaptability to diverse translation scenarios. This paper introduces R1-Translator (R1-T1), a novel framework that achieves inference-time reasoning for general MT via reinforcement learning (RL) with human-aligned CoTs comprising six common patterns. Our approach pioneers three innovations: (1) extending reasoning-based translation beyond MT sub-tasks to six languages and diverse tasks (e.g., legal/medical domain adaptation, idiom resolution); (2) formalizing six expert-curated CoT templates that mirror hybrid human strategies such as context-aware paraphrasing and back-translation; and (3) enabling self-evolving CoT discovery through RL. Experimental results show steady translation performance improvements across 11 languages and 40 translation directions on the Flores-101 test set, especially for languages unseen during training.
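The abstract describes an RL loop in which the model samples one of six human-aligned CoT patterns before producing a translation and is rewarded on translation quality. Below is a minimal, hypothetical Python sketch of such a loop; the template texts, the function names (sample_translation, quality_score, rl_step), and the random reward are illustrative placeholders for the paper's actual prompts, model calls, and quality metric, none of which are specified here.

import random

# Six CoT pattern names paraphrased from the strategies the abstract mentions
# (e.g., context-aware paraphrasing, back-translation); the exact template
# wording is an assumption for illustration.
cot_templates = [
    "Analyze the source sentence's domain and register, then translate.",
    "Paraphrase the source in context before translating.",
    "Translate, then back-translate and check for meaning drift.",
    "Identify idioms and resolve them into target-language equivalents first.",
    "List ambiguous terms, pick senses from context, then translate.",
    "Draft a literal translation, then refine it for fluency.",
]

def sample_translation(source: str, template: str) -> tuple[str, str]:
    """Placeholder for an LLM call: returns (reasoning_trace, translation)."""
    reasoning = f"[{template}] ... reasoning about: {source}"
    translation = f"<translation of '{source}'>"
    return reasoning, translation

def quality_score(source: str, translation: str) -> float:
    """Placeholder reward, e.g., a COMET-style quality estimate in [0, 1]."""
    return random.random()

def rl_step(source: str) -> float:
    template = random.choice(cot_templates)       # explore CoT patterns
    reasoning, translation = sample_translation(source, template)
    reward = quality_score(source, translation)   # reward signal for RL
    # A real implementation would apply a policy-gradient update (e.g., PPO)
    # to the LLM using this reward; the update is omitted in this sketch.
    return reward

if __name__ == "__main__":
    print(rl_step("Il pleut des cordes."))

Because the templates are sampled and only the reward is supervised, the policy is free to recombine or depart from the seed patterns over training, which is one way the "self-evolving CoT discovery" the abstract claims could arise.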

@article{he2025_2502.19735,
  title={R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning Learning},
  author={Minggui He and Yilun Liu and Shimin Tao and Yuanchang Luo and Hongyong Zeng and Chang Su and Li Zhang and Hongxia Ma and Daimeng Wei and Weibin Meng and Hao Yang and Boxing Chen and Osamu Yoshie},
  journal={arXiv preprint arXiv:2502.19735},
  year={2025}
}