ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

BiViT: Extremely Compressed Binary Vision Transformer
14 November 2022
Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang
Topics: ViT, MQ

Papers citing "BiViT: Extremely Compressed Binary Vision Transformer"
21 / 21 papers shown

1. APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers
   Zhuguanyu Wu, Jiayi Zhang, Jiaxin Chen, Jinyang Guo, Di Huang, Yunhong Wang
   MQ · 03 Apr 2025

2. Binarized Mamba-Transformer for Lightweight Quad Bayer HybridEVS Demosaicing
   Shiyang Zhou, Haijin Zeng, Yunfan Lu, Tong Shao, Ke Tang, Yongyong Chen, Jie Liu, Jingyong Su
   Mamba · 20 Mar 2025

3. ViT-1.58b: Mobile Vision Transformers in the 1-bit Era
   Zhengqing Yuan, Rong-Er Zhou, Hongyi Wang, Lifang He, Yanfang Ye, Lichao Sun
   MQ · 26 Jun 2024

4. USM RNN-T model weights binarization
   Oleg Rybakov, Dmitriy Serdyuk, Chengjian Zheng
   MQ · 05 Jun 2024

5. Scalable MatMul-free Language Modeling
   Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, P. Zhou, Jason Eshraghian
   04 Jun 2024

6. Efficient Multimodal Large Language Models: A Survey
   Yizhang Jin, Jian Li, Yexin Liu, Tianjun Gu, Kai Wu, ..., Xin Tan, Zhenye Gan, Yabiao Wang, Chengjie Wang, Lizhuang Ma
   LRM · 17 May 2024

7. Model Quantization and Hardware Acceleration for Vision Transformers: A Comprehensive Survey
   Dayou Du, Gu Gong, Xiaowen Chu
   MQ · 01 May 2024

8. A General and Efficient Training for Transformer via Token Expansion
   Wenxuan Huang, Yunhang Shen, Jiao Xie, Baochang Zhang, Gaoqi He, Ke Li, Xing Sun, Shaohui Lin
   31 Mar 2024

9. Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers
   Yiwei Lu, Yaoliang Yu, Xinlin Li, Vahid Partovi Nia
   MQ · 27 Feb 2024

10. A Comprehensive Survey of Convolutions in Deep Learning: Applications, Challenges, and Future Trends
    Abolfazl Younesi, Mohsen Ansari, Mohammadamin Fazli, A. Ejlali, Muhammad Shafique, Joerg Henkel
    3DV · 23 Feb 2024

11. Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM
    Luoming Zhang, Wen Fei, Weijia Wu, Yefei He, Zhenyu Lou, Hong Zhou
    MQ · 07 Oct 2023

12. EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models
    Yefei He, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang
    DiffM, MQ · 05 Oct 2023

13. ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer
    Haoran You, Huihong Shi, Yipin Guo, Yingyan Lin
    10 Jun 2023

14. GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples
    T. Gao, Chengzhong Xu, Le Zhang, Hui Kong
    13 May 2023

15. I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
    Zhikai Li, Qingyi Gu
    MQ · 04 Jul 2022

16. BiT: Robustly Binarized Multi-distilled Transformer
    Zechun Liu, Barlas Oğuz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad
    MQ · 25 May 2022

17. Binarizing by Classification: Is soft function really necessary?
    Yefei He, Luoming Zhang, Weijia Wu, Hong Zhou
    MQ · 16 May 2022

18. CMT: Convolutional Neural Networks Meet Vision Transformers
    Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Chunjing Xu, Yunhe Wang, Chang Xu
    ViT · 13 Jul 2021

19. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
    Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
    ViT · 24 Feb 2021

20. BinaryBERT: Pushing the Limit of BERT Quantization
    Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
    MQ · 31 Dec 2020

21. Forward and Backward Information Retention for Accurate Binary Neural Networks
    Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, F. Yu, Jingkuan Song
    MQ · 24 Sep 2019