SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning (v2, latest)

22 August 2025
Authors: Yicheng Ji, Jun Zhang, Heming Xia, Jinpeng Chen, Lidan Shou, Gang Chen, Huan Li
Topics: VLM
Links: ArXiv (abs) · PDF · HTML · GitHub (13★)

Papers citing "SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning" (2 papers)
SpecVLM: Fast Speculative Decoding in Vision-Language Models
Authors: Haiduo Huang, Fuwei Yang, Zhenhua Liu, Xuanwu Yin, Dong Li, Pengju Ren, E. Barsoum
Topics: MLLM, VLM
15 Sep 2025
Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference
Authors: Yuan Feng, Junlin Lv, Yukun Cao, Xike Xie, S. K. Zhou
Topics: VLM
16 Jul 2024