ResearchTrend.AI

SEA: Sparse Linear Attention with Estimated Attention Mask
arXiv:2310.01777, 3 October 2023
Heejun Lee, Jina Kim, Jeffrey Willette, Sung Ju Hwang

Papers citing "SEA: Sparse Linear Attention with Estimated Attention Mask" (6 / 6 papers shown)

  • Fused3S: Fast Sparse Attention on Tensor Cores. Zitong Li, Aparna Chandramowlishwaran. 12 May 2025.
  • InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU. Heejun Lee, G. Park, Jaduk Suh, Sung Ju Hwang. 13 Feb 2025.
  • Breaking the Attention Bottleneck. Kalle Hilsenbek. 16 Jun 2024.
  • When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models. Haoran You, Yichao Fu, Zheng Wang, Amir Yazdanbakhsh, Yingyan Celine Lin. 11 Jun 2024.
  • Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 28 Jul 2020.
  • GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.