Exploring Continuous Integrate-and-Fire for Efficient and Adaptive
Simultaneous Speech Translation
Simultaneous speech translation (SimulST) is a challenging task that aims to translate streaming speech directly, before the complete input is observed. A SimulST system generally includes two important components: the pre-decision, which aggregates the speech information, and the policy, which decides whether to read or write. While recent works have proposed a variety of strategies to improve the pre-decision, they mostly adopt the fixed wait-k policy; adaptive policies are rarely explored. We propose to model the adaptive policy using Continuous Integrate-and-Fire (CIF). In our proposed model, the CIF is not only responsible for aggregating speech information, but also for deciding when to read or write. To adapt the CIF to the SimulST task, we propose two modifications: a token-level quantity loss or an infinite lookback attention. We show that our model can learn an adaptive policy effectively, achieving comparable or superior performance to MMA at lower latency, while being more efficient to train.
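The core CIF mechanism the abstract builds on accumulates a learned weight for each incoming speech frame and "fires" a token boundary whenever the running sum crosses a threshold, with the residual weight carried over to the next token; this is what lets the policy adapt to the input rather than follow a fixed wait-k schedule. A minimal sketch of the firing rule (function name and example weights are illustrative, not from the paper):

```python
def cif_segment(alphas, threshold=1.0):
    """Continuous Integrate-and-Fire firing rule: accumulate per-frame
    weights and emit a token boundary whenever the sum reaches the
    threshold; the remainder carries over to the next token."""
    acc = 0.0
    boundaries = []
    for t, a in enumerate(alphas):
        acc += a
        if acc >= threshold:
            boundaries.append(t)  # fire: write a target token here
            acc -= threshold      # carry over the residual weight
    return boundaries

# Frames carry varying amounts of information; here the model fires
# after frames 2 and 4.
print(cif_segment([0.4, 0.3, 0.5, 0.2, 0.6, 0.4]))  # → [2, 4]
```

In the streaming setting, each fire corresponds to a "write" decision, while frames between fires correspond to "read" decisions.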