LLMs Can Achieve High-quality Simultaneous Machine Translation as Efficiently as Offline

13 April 2025
Biao Fu
Minpeng Liao
Kai Fan
Chengxi Li
Liang Zhang
Yidong Chen
Xiaodong Shi
Abstract

When the complete source sentence is provided, Large Language Models (LLMs) perform excellently in offline machine translation even with a simple prompt such as "Translate the following sentence from [src lang] into [tgt lang]:". However, in many real scenarios the source tokens arrive in a streaming manner and simultaneous machine translation (SiMT) is required; the efficiency and performance of decoder-only LLMs are then significantly limited by their auto-regressive nature. To enable LLMs to achieve high-quality SiMT as efficiently as offline translation, we propose a novel paradigm that includes constructing supervised fine-tuning (SFT) data for SiMT, along with new training and inference strategies. To replicate the token input/output stream of SiMT, the source and target tokens are rearranged into an interleaved sequence, separated by special tokens according to varying latency requirements. This enables powerful LLMs to learn read and write operations adaptively, based on varying latency prompts, while still maintaining efficient auto-regressive decoding. Experimental results show that, even with limited SFT data, our approach achieves state-of-the-art performance across various SiMT benchmarks and preserves the original offline translation abilities. Moreover, our approach generalizes well to the document-level SiMT setting without requiring specific fine-tuning, even surpassing the offline translation model.
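
To make the interleaving idea concrete, the sketch below (not the authors' code) builds one interleaved training sequence from a source/target token pair using a simple wait-k read/write schedule. The special-token names <SRC> and <TGT> and the wait-k policy itself are illustrative assumptions; the paper instead derives the read/write schedule from varying latency prompts during SFT.

# Illustrative sketch only: interleave source and target tokens so that
# ordinary next-token prediction can emit either a "read" marker or a
# target token at each step. Token names and the wait-k policy are
# assumptions for illustration, not the paper's exact construction.

def interleave_wait_k(src_tokens, tgt_tokens, k=3,
                      read_tok="<SRC>", write_tok="<TGT>"):
    """Build one interleaved sequence for a given latency k.

    Reads k source tokens up front, then alternates one write / one read
    until both streams are exhausted.
    """
    seq = []
    s = 0
    # initial read of k source tokens
    while s < min(k, len(src_tokens)):
        seq += [read_tok, src_tokens[s]]
        s += 1
    # alternate write / read until both streams are consumed
    for tgt_tok in tgt_tokens:
        seq += [write_tok, tgt_tok]
        if s < len(src_tokens):
            seq += [read_tok, src_tokens[s]]
            s += 1
    return seq


if __name__ == "__main__":
    src = "Der Hund läuft im Park".split()
    tgt = "The dog runs in the park".split()
    print(" ".join(interleave_wait_k(src, tgt, k=2)))

Training on sequences of this shape keeps decoding purely auto-regressive, which is what the abstract identifies as the key to matching offline-translation efficiency.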

@article{fu2025_2504.09570,
  title={LLMs Can Achieve High-quality Simultaneous Machine Translation as Efficiently as Offline},
  author={Biao Fu and Minpeng Liao and Kai Fan and Chengxi Li and Liang Zhang and Yidong Chen and Xiaodong Shi},
  journal={arXiv preprint arXiv:2504.09570},
  year={2025}
}