ZipVL: Efficient Large Vision-Language Models with Dynamic Token Sparsification and KV Cache Compression
arXiv: 2410.08584
11 October 2024 · VLM
Yefei He, Feng Chen, Jing Liu, Wenqi Shao, Hong Zhou, K. Zhang, Bohan Zhuang

Papers citing "ZipVL: Efficient Large Vision-Language Models with Dynamic Token Sparsification and KV Cache Compression" (8 papers shown)

Static or Dynamic: Towards Query-Adaptive Token Selection for Video Question Answering
Yumeng Shi, Quanyu Long, Wenya Wang
30 Apr 2025

ZipR1: Reinforcing Token Sparsity in MLLMs
Feng Chen, Yefei He, Lequan Lin, J. Liu, Bohan Zhuang, Qi Wu
23 Apr 2025

MMInference: Accelerating Pre-filling for Long-Context VLMs via Modality-Aware Permutation Sparse Attention
Yucheng Li, Huiqiang Jiang, Chengruidong Zhang, Qianhui Wu, Xufang Luo, ..., Amir H. Abdi, Dongsheng Li, Jianfeng Gao, Y. Yang, Lili Qiu
22 Apr 2025

Efficient LLaMA-3.2-Vision by Trimming Cross-attended Visual Features
Jewon Lee, Ki-Ung Song, Seungmin Yang, Donguk Lim, Jaeyeon Kim, Wooksu Shin, Bo-Kyeong Kim, Yong Jae Lee, Tae-Ho Kim
01 Apr 2025 · VLM

AirCache: Activating Inter-modal Relevancy KV Cache Compression for Efficient Large Vision-Language Model Inference
Kai Huang, Hao Zou, Bochen Wang, Ye Xi, Zhen Xie, Hao Wang
31 Mar 2025 · VLM

Beyond Intermediate States: Explaining Visual Redundancy through Language
Dingchen Yang, Bowen Cao, Anran Zhang, Weibo Gu, Winston Hu, Guang Chen
26 Mar 2025 · VLM

AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding
Xiao Wang, Qingyi Si, Jianlong Wu, Shiyu Zhu, Li Cao, Liqiang Nie
16 Mar 2025 · VLM

Beyond Token Compression: A Training-Free Reduction Framework for Efficient Visual Processing in MLLMs
Hongliang Li, Jiaxin Zhang, Wenhui Liao, Dezhi Peng, Kai Ding, Lianwen Jin
31 Jan 2025 · OffRL · MQ