arXiv:2509.24663 · Cited By
InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation
29 September 2025
Weilin Zhao, Z. Zhou, Zhou Su, Chaojun Xiao, Yuxuan Li, Yanghao Li, Yudi Zhang, Weilun Zhao, Ruoyao Xiao, Yuxiang Huang, Ao Sun, Xu Han, Zhiyuan Liu
Community: VLM
Links: ArXiv (abs) · PDF · HTML · HuggingFace (12 upvotes) · GitHub (226★)
Papers citing "InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation" (4 papers)
SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space
Zhenyi Shen, Junru Lu, Lin Gui, Jiazheng Li, Yulan He, D. Yin, Xing Sun
25 Nov 2025
Alleviating Forgetfulness of Linear Attention by Hybrid Sparse Attention and Contextualized Learnable Token Eviction
Mutian He, Philip N. Garner
Community: CLL
23 Oct 2025
NOSA: Native and Offloadable Sparse Attention
Yuxiang Huang, Chaojun Xiao, Xu Han, Zhiyuan Liu
Community: MQ
15 Oct 2025
VideoNSA: Native Sparse Attention Scales Video Understanding
Enxin Song, Wenhao Chai, Shusheng Yang, Ethan Armand, Xiaojun Shan, Haiyang Xu, Jianwen Xie, Zhuowen Tu
02 Oct 2025