
InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation

arXiv:2509.24663 · 29 September 2025

Weilin Zhao, Z. Zhou, Zhou Su, Chaojun Xiao, Yuxuan Li, Yanghao Li, Yudi Zhang, Weilun Zhao, Ruoyao Xiao, Yuxiang Huang, Ao Sun, Xu Han, Zhiyuan Liu

Topic: VLM

Links: ArXiv (abs) · PDF · HTML · HuggingFace (12 upvotes) · GitHub (226★)

Papers citing "InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation"

Showing 4 of 4 papers.
SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space
Zhenyi Shen, Junru Lu, Lin Gui, Jiazheng Li, Yulan He, D. Yin, Xing Sun
25 Nov 2025
Alleviating Forgetfulness of Linear Attention by Hybrid Sparse Attention and Contextualized Learnable Token Eviction
Mutian He, Philip N. Garner
Topic: CLL
23 Oct 2025
NOSA: Native and Offloadable Sparse Attention
Yuxiang Huang, Chaojun Xiao, Xu Han, Zhiyuan Liu
Topic: MQ
15 Oct 2025
VideoNSA: Native Sparse Attention Scales Video Understanding
Enxin Song, Wenhao Chai, Shusheng Yang, Ethan Armand, Xiaojun Shan, Haiyang Xu, Jianwen Xie, Zhuowen Tu
02 Oct 2025