
Improving Autoregressive NLP Tasks via Modular Linearized Attention
Victor Agostinelli, Lizhong Chen
17 April 2023 (arXiv:2304.08453)

Papers citing "Improving Autoregressive NLP Tasks via Modular Linearized Attention"

3 papers shown:

1. LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions
   Victor Agostinelli, Sanghyun Hong, Lizhong Chen
   18 May 2024

2. fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit
   Changhan Wang, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Ann Lee, Peng-Jen Chen, Jiatao Gu, J. Pino
   14 Sep 2021

3. Big Bird: Transformers for Longer Sequences
   Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
   28 Jul 2020