ResearchTrend.AI

arXiv:2305.17328 · Cited By
Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers

27 May 2023
Hongjie Wang, Bhishma Dedhia, N. Jha
ViT · VLM

Papers citing "Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers"

20 papers shown
  • Efficient Token Compression for Vision Transformer with Spatial Information Preserved
    Junzhu Mao, Yang Shen, Jinyang Guo, Yazhou Yao, Xiansheng Hua · ViT · 30 Mar 2025
  • Similarity-Aware Token Pruning: Your VLM but Faster
    Ahmadreza Jeddi, Negin Baghbanzadeh, Elham Dolatabadi, Babak Taati · 3DV, VLM · 14 Mar 2025
  • Accelerate 3D Object Detection Models via Zero-Shot Attention Key Pruning
    Lizhen Xu, Xiuxiu Bai, Xiaojun Jia, Jianwu Fang, Shanmin Pang · 13 Mar 2025
  • Multi-Cue Adaptive Visual Token Pruning for Large Vision-Language Models
    Bozhi Luan, Wengang Zhou, Hao Feng, Zhe Wang, Xiaosong Li, H. Li · VLM · 11 Mar 2025
  • Enhancing Layer Attention Efficiency through Pruning Redundant Retrievals
    Hanze Li, Xiande Huang · 09 Mar 2025
  • AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning
    Yiwu Zhong, Zhuoming Liu, Yin Li, Liwei Wang · 04 Dec 2024
  • Redundant Queries in DETR-Based 3D Detection Methods: Unnecessary and Prunable
    Lizhen Xu, Shanmin Pang, Wenzhao Qiu, Zehao Wu, Xiuxiu Bai, K. Mei, Jianru Xue · 03 Dec 2024
  • Training Noise Token Pruning
    Mingxing Rao, Bohan Jiang, Daniel Moyer · ViT · 27 Nov 2024
  • Is Less More? Exploring Token Condensation as Training-free Test-time Adaptation
    Zixin Wang, Dong Gong, Sen Wang, Zi Huang, Yadan Luo · VLM · 16 Oct 2024
  • Fit and Prune: Fast and Training-free Visual Token Pruning for Multi-modal Large Language Models
    Weihao Ye, Qiong Wu, Wenhao Lin, Yiyi Zhou · VLM · 16 Sep 2024
  • PRANCE: Joint Token-Optimization and Structural Channel-Pruning for Adaptive ViT Inference
    Ye Li, Chen Tang, Yuan Meng, Jiajun Fan, Zenghao Chai, Xinzhu Ma, Zhi Wang, Wenwu Zhu · 06 Jul 2024
  • Pruning One More Token is Enough: Leveraging Latency-Workload Non-Linearities for Vision Transformers on the Edge
    Nick Eliopoulos, Purvish Jajal, James Davis, Gaowen Liu, George K. Thiravathukal, Yung-Hsiang Lu · 01 Jul 2024
  • CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling
    Yu Bai, Xiyuan Zou, Heyan Huang, Sanxing Chen, Marc-Antoine Rondeau, Yang Gao, Jackie Chi Kit Cheung · 17 Jun 2024
  • ALGM: Adaptive Local-then-Global Token Merging for Efficient Semantic Segmentation with Plain Vision Transformers
    Narges Norouzi, Svetlana Orlova, Daan de Geus, Gijs Dubbelman · ViT, FedML · 14 Jun 2024
  • Accelerating Transformers with Spectrum-Preserving Token Merging
    Hoai-Chau Tran, D. M. Nguyen, Duy M. Nguyen, Trung Thanh Nguyen, Ngan Le, Pengtao Xie, Daniel Sonntag, James Y. Zou, Binh T. Nguyen, Mathias Niepert · 25 May 2024
  • Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models
    Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, N. Jha, Yuchen Liu · 08 May 2024
  • Harnessing Attention Mechanisms: Efficient Sequence Reduction using Attention-based Autoencoders
    Daniel Biermann, Fabrizio Palumbo, Morten Goodwin, Ole-Christoffer Granmo · 23 Oct 2023
  • Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention
    Xiangcheng Liu, Tianyi Wu, Guodong Guo · ViT · 28 Sep 2022
  • Training Vision Transformers with Only 2040 Images
    Yunhao Cao, Hao Yu, Jianxin Wu · ViT · 26 Jan 2022
  • Masked Autoencoders Are Scalable Vision Learners
    Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick · ViT, TPM · 11 Nov 2021