ResearchTrend.AI
MiniViT: Compressing Vision Transformers with Weight Multiplexing

14 April 2022
Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan
ViT

Papers citing "MiniViT: Compressing Vision Transformers with Weight Multiplexing" (50 of 69 papers shown)
LightNobel: Improving Sequence Length Limitation in Protein Structure Prediction Model via Adaptive Activation Quantization · Seunghee Han, S. Choi, J. Kim · 09 May 2025
Pyramid-based Mamba Multi-class Unsupervised Anomaly Detection · Nasar Iqbal, Niki Martinel · Mamba · 04 Apr 2025
KernelDNA: Dynamic Kernel Sharing via Decoupled Naive Adapters · Haiduo Huang, Yadong Zhang, Pengju Ren · 30 Mar 2025
Similarity-Guided Layer-Adaptive Vision Transformer for UAV Tracking · Chaocan Xue, Bineng Zhong, Qihua Liang, Yaozong Zheng, Ning Li, Yuanliang Xue, Shuxiang Song · 09 Mar 2025
Not Every Patch is Needed: Towards a More Efficient and Effective Backbone for Video-based Person Re-identification · Lanyun Zhu, T. Chen, Deyi Ji, Jieping Ye, J. Liu · 28 Jan 2025
Mix-QViT: Mixed-Precision Vision Transformer Quantization Driven by Layer Importance and Quantization Sensitivity · Navin Ranjan, Andreas E. Savakis · MQ · 10 Jan 2025
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers · Yunshan Zhong, Yuyao Zhou, Yuxin Zhang, Shen Li, Yong Li, Fei Chao, Zhanpeng Zeng, Rongrong Ji · MQ · 31 Dec 2024
Learning an Adaptive and View-Invariant Vision Transformer for Real-Time UAV Tracking · You Wu, Yongxin Li, Mengyuan Liu, Xucheng Wang, Xiangyang Yang, Hengzhou Ye, Dan Zeng, Qijun Zhao, Shuiwang Li · 28 Dec 2024
Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher · Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen · 05 Oct 2024
FINE: Factorizing Knowledge for Initialization of Variable-sized Diffusion Models · Yucheng Xie, Fu Feng, Ruixiao Shi, Jing Wang, Xin Geng · AI4CE · 28 Sep 2024
General Compression Framework for Efficient Transformer Object Tracking · Lingyi Hong, Jinglun Li, Xinyu Zhou, Shilin Yan, Pinxue Guo, ..., Zhaoyu Chen, Shuyong Gao, Wei Zhang, Hong Lu, Wenqiang Zhang · ViT · 26 Sep 2024
TReX: Reusing Vision Transformer's Attention for Efficient Xbar-based Computing · Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda · ViT · 22 Aug 2024
From Efficient Multimodal Models to World Models: A Survey · Xinji Mai, Zeng Tao, Junxiong Lin, Haoran Wang, Yang Chang, Yanlan Kang, Yan Wang, Wenqiang Zhang · 27 Jun 2024
Adaptively Bypassing Vision Transformer Blocks for Efficient Visual Tracking · Xiangyang Yang, Dan Zeng, Xucheng Wang, You Wu, Hengzhou Ye, Qijun Zhao, Shuiwang Li · 12 Jun 2024
Efficient Multimodal Large Language Models: A Survey · Yizhang Jin, Jian Li, Yexin Liu, Tianjun Gu, Kai Wu, ..., Xin Tan, Zhenye Gan, Yabiao Wang, Chengjie Wang, Lizhuang Ma · LRM · 17 May 2024
Exploring Learngene via Stage-wise Weight Sharing for Initializing Variable-sized Models · Shiyu Xia, Wenxuan Zhu, Xu Yang, Xin Geng · 25 Apr 2024
Data-independent Module-aware Pruning for Hierarchical Vision Transformers · Yang He, Joey Tianyi Zhou · ViT · 21 Apr 2024
Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers · Diana-Nicoleta Grigore, Mariana-Iuliana Georgescu, J. A. Justo, T. Johansen, Andreea-Iuliana Ionescu, Radu Tudor Ionescu · 14 Apr 2024
Dense Vision Transformer Compression with Few Samples · Hanxiao Zhang, Yifan Zhou, Guo-Hua Wang, Jianxin Wu · ViT, VLM · 27 Mar 2024
GRITv2: Efficient and Light-weight Social Relation Recognition · Sagar Reddy, Neeraj Kasera, Avinash Thakur · ViT · 11 Mar 2024
On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune? · Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh · 29 Feb 2024
Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers · Yiwei Lu, Yaoliang Yu, Xinlin Li, Vahid Partovi Nia · MQ · 27 Feb 2024
EffLoc: Lightweight Vision Transformer for Efficient 6-DOF Camera Relocalization · Zhendong Xiao, Changhao Chen, Shan Yang, Wu Wei · 21 Feb 2024
Head-wise Shareable Attention for Large Language Models · Zouying Cao, Yifei Yang, Hai Zhao · 19 Feb 2024
LRP-QViT: Mixed-Precision Vision Transformer Quantization via Layer-wise Relevance Propagation · Navin Ranjan, Andreas E. Savakis · MQ · 20 Jan 2024
Group Multi-View Transformer for 3D Shape Analysis with Spatial Encoding · Lixiang Xu, Qingzhe Cui, Richang Hong, Wei Xu, Enhong Chen, Xin Yuan, Chenglong Li, Y. Tang · 27 Dec 2023
Transformer as Linear Expansion of Learngene · Shiyu Xia, Miaosen Zhang, Xu Yang, Ruiming Chen, Haokun Chen, Xin Geng · 09 Dec 2023
PhytNet -- Tailored Convolutional Neural Networks for Custom Botanical Data · Jamie R. Sykes, Katherine Denby, Daniel W. Franks · 20 Nov 2023
I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization · Yunshan Zhong, Jiawei Hu, Mingbao Lin, Mengzhao Chen, Rongrong Ji · MQ · 16 Nov 2023
Lightweight Full-Convolutional Siamese Tracker · Yunfeng Li, Bo Wang, Xueyi Wu, Zhuoyan Liu, Ye Li · 09 Oct 2023
CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs · Ao Wang, Hui Chen, Zijia Lin, Sicheng Zhao, J. Han, Guiguang Ding · ViT · 27 Sep 2023
A survey on efficient vision transformers: algorithms, techniques, and performance benchmarking · Lorenzo Papa, Paolo Russo, Irene Amerini, Luping Zhou · 05 Sep 2023
Revisiting Vision Transformer from the View of Path Ensemble · Shuning Chang, Pichao Wang, Haowen Luo, Fan Wang, Mike Zheng Shou · ViT · 12 Aug 2023
A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation · Jinjing Zhu, Yuan Luo, Xueye Zheng, Hao Wang, Lin Wang · 24 Jul 2023
Revisiting Token Pruning for Object Detection and Instance Segmentation · Yifei Liu, Mathias Gehrig, Nico Messikommer, Marco Cannici, Davide Scaramuzza · ViT, VLM · 12 Jun 2023
MixFormerV2: Efficient Fully Transformer Tracking · Yutao Cui, Tian-Shu Song, Gangshan Wu, Liming Wang · 25 May 2023
Fast-StrucTexT: An Efficient Hourglass Transformer with Modality-guided Dynamic Token Merge for Document Understanding · Mingliang Zhai, Yulin Li, Xiameng Qin, Chen Yi, Qunyi Xie, Chengquan Zhang, Kun Yao, Yuwei Wu, Yunde Jia · 19 May 2023
Boost Vision Transformer with GPU-Friendly Sparsity and Quantization · Chong Yu, Tao Chen, Zhongxue Gan, Jiayuan Fan · MQ, ViT · 18 May 2023
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention · Xinyu Liu, Houwen Peng, Ningxin Zheng, Yuqing Yang, Han Hu, Yixuan Yuan · ViT · 11 May 2023
Patch-wise Mixed-Precision Quantization of Vision Transformer · Junrui Xiao, Zhikai Li, Lianwei Yang, Qingyi Gu · MQ · 11 May 2023
RIFormer: Keep Your Vision Backbone Effective While Removing Token Mixer · Jiahao Wang, Songyang Zhang, Yong Liu, Taiqiang Wu, Yujiu Yang, Xihui Liu, Kai-xiang Chen, Ping Luo, Dahua Lin · 12 Apr 2023
Sim-T: Simplify the Transformer Network by Multiplexing Technique for Speech Recognition · Guangyong Wei, Zhikui Duan, Shiren Li, Guangguang Yang, Xinmei Yu, Junhua Li · 11 Apr 2023
Towards Efficient Task-Driven Model Reprogramming with Foundation Models · Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, Yaowei Wang · 05 Apr 2023
Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture · Peiyu Liu, Ze-Feng Gao, Yushuo Chen, Wayne Xin Zhao, Ji-Rong Wen · MoE · 27 Mar 2023
Frame Flexible Network · Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng R. Li, Yun Fu · 26 Mar 2023
X-Pruner: eXplainable Pruning for Vision Transformers · Lu Yu, Wei Xiang · ViT · 08 Mar 2023
Rotation Invariant Quantization for Model Compression · Dor-Joseph Kampeas, Yury Nahshan, Hanoch Kremer, Gil Lederman, Shira Zaloshinski, Zheng Li, E. Haleva · MQ · 03 Mar 2023
A Comprehensive Review and a Taxonomy of Edge Machine Learning: Requirements, Paradigms, and Techniques · Wenbin Li, Hakim Hacid, Ebtesam Almazrouei, Merouane Debbah · 16 Feb 2023
Knowledge Distillation in Vision Transformers: A Critical Review · Gousia Habib, Tausifa Jan Saleem, Brejesh Lall · 04 Feb 2023
RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers · Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu · MQ · 16 Dec 2022