Simultaneous Masking, Not Prompting Optimization: A Paradigm Shift in Fine-tuning LLMs for Simultaneous Translation

16 May 2024
Matthew Raffel, Victor Agostinelli, Lizhong Chen
arXiv: 2405.10443

Papers citing "Simultaneous Masking, Not Prompting Optimization: A Paradigm Shift in Fine-tuning LLMs for Simultaneous Translation"

7 papers

LLMs Can Achieve High-quality Simultaneous Machine Translation as Efficiently as Offline
Biao Fu, Minpeng Liao, Kai Fan, Chengxi Li, L. Zhang, Yidong Chen, Xiaodong Shi
13 Apr 2025

InfiniSST: Simultaneous Translation of Unbounded Speech with Large Language Model
Siqi Ouyang, Xi Xu, Lei Li
04 Mar 2025

What Are They Filtering Out? A Survey of Filtering Strategies for Harm Reduction in Pretraining Datasets
Marco Antonio Stranisci, Christian Hardmeier
17 Feb 2025

FASST: Fast LLM-based Simultaneous Speech Translation
Siqi Ouyang, Xi Xu, Chinmay Dandekar, Lei Li
18 Aug 2024

SiLLM: Large Language Models for Simultaneous Machine Translation
Shoutao Guo, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng
20 Feb 2024

The Falcon Series of Open Language Models
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra-Aimée Cojocaru, ..., Quentin Malartic, Daniele Mazzotta, Badreddine Noune, B. Pannier, Guilherme Penedo
28 Nov 2023

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021