arXiv:2508.16201 (v2, latest)
SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning
22 August 2025
Yicheng Ji, Jun Zhang, Heming Xia, Jinpeng Chen, Lidan Shou, Gang Chen, Huan Li
Topics: VLM
Links: arXiv (abs) · PDF · HTML · GitHub (13★)
Papers citing "SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning" (2 of 2 papers shown)
SpecVLM: Fast Speculative Decoding in Vision-Language Models
Haiduo Huang, Fuwei Yang, Zhenhua Liu, Xuanwu Yin, Dong Li, Pengju Ren, E. Barsoum
Topics: MLLM, VLM
15 Sep 2025
Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference
Yuan Feng, Junlin Lv, Yukun Cao, Xike Xie, S. K. Zhou
Topics: VLM
16 Jul 2024