ResearchTrend.AI
Non-autoregressive Sequence-to-Sequence Vision-Language Models

4 March 2024
Kunyu Shi
Qi Dong
Luis Goncalves
Zhuowen Tu
Stefano Soatto
Abstract

Sequence-to-sequence vision-language models show promise, but their applicability is limited by inference latency stemming from autoregressive token generation. We propose a parallel-decoding sequence-to-sequence vision-language model, trained with a Query-CTC loss that marginalizes over multiple inference paths in the decoder. This allows us to model the joint distribution of output tokens, rather than restricting to the conditional distributions of an autoregressive model. The resulting model, NARVL, achieves performance on par with its state-of-the-art autoregressive counterpart while being faster at inference: the linear cost of sequential token generation is replaced by constant-time joint inference.
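The latency claim in the abstract comes down to how many sequential decoder calls are needed per output. A minimal sketch of that contrast, with hypothetical `step_fn`/`batch_fn` callables standing in for an actual transformer decoder (this is an illustration of the autoregressive-vs-parallel decoding pattern, not the NARVL implementation):

```python
def decode_autoregressive(step_fn, max_len):
    """One decoder call per token: latency grows linearly with output length.

    step_fn(prefix) -> next token, given the tokens generated so far.
    Returns (tokens, number_of_sequential_decoder_calls).
    """
    tokens = []
    for _ in range(max_len):
        tokens.append(step_fn(tokens))  # each call depends on the previous one
    return tokens, max_len


def decode_parallel(batch_fn, num_queries):
    """A single decoder call over a fixed set of queries: constant latency.

    batch_fn(n) -> all n output tokens at once, decoded jointly.
    Returns (tokens, number_of_sequential_decoder_calls).
    """
    return batch_fn(num_queries), 1  # one call regardless of output length
```

With a real model, the parallel path trades the chain of length-`max_len` dependent calls for one batched forward pass; the Query-CTC loss described above is what lets training cope with the resulting alignment ambiguity.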

@article{shi2025_2403.02249,
  title={Non-autoregressive Sequence-to-Sequence Vision-Language Models},
  author={Kunyu Shi and Qi Dong and Luis Goncalves and Zhuowen Tu and Stefano Soatto},
  journal={arXiv preprint arXiv:2403.02249},
  year={2025}
}