Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks
arXiv:1909.13144

28 September 2019
Yuhang Li
Xin Dong
Wei Wang
MQ
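The abstract is not reproduced on this listing, but the title names the technique: quantization levels built as sums of powers of two rather than a uniform grid. The minimal Python sketch below illustrates that idea only; the number of additive terms, the exponent spacing, the normalization, and the helper names (apot_levels, quantize_apot) are assumptions made for illustration, not the paper's exact construction.

    # Illustrative sketch only: not the paper's exact level construction.
    import itertools
    import numpy as np

    def apot_levels(terms=2, values_per_term=4):
        """Build a non-uniform level set as sums of powers of two."""
        term_sets = []
        for t in range(terms):
            # Each additive term contributes 0 or a power of two; the exponent
            # spacing used here (interleaved by term index) is an assumption.
            term_sets.append(
                [0.0] + [2.0 ** -(t + terms * i) for i in range(1, values_per_term)]
            )
        levels = np.array(sorted({sum(c) for c in itertools.product(*term_sets)}))
        return levels / levels.max()  # normalize so the largest level is 1

    def quantize_apot(x, levels):
        """Round each magnitude to the nearest level; sign is handled separately."""
        sign = np.sign(x)
        mag = np.clip(np.abs(x), 0.0, 1.0)
        idx = np.abs(mag[..., None] - levels).argmin(axis=-1)
        return sign * levels[idx]

    weights = np.random.uniform(-1.0, 1.0, size=8)
    levels = apot_levels()
    print(levels)
    print(quantize_apot(weights, levels))

With two terms of four candidate values each, this yields at most 16 distinct non-negative levels that cluster near zero, which is the kind of non-uniform spacing the title refers to; because every term is a power of two, the multiplications in inference can reduce to shifts and adds.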

Papers citing "Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks"

41 / 41 papers shown
Breaking the Limits of Quantization-Aware Defenses: QADT-R for Robustness Against Patch-Based Adversarial Attacks in QNNs
Amira Guesmi
B. Ouni
Muhammad Shafique
MQ
AAML
36
0
0
10 Mar 2025
Towards Accurate Binary Spiking Neural Networks: Learning with Adaptive Gradient Modulation Mechanism
Yu Liang
Wenjie Wei
A. Belatreche
Honglin Cao
Zijian Zhou
Shuai Wang
Malu Zhang
Y. Yang
MQ
63
0
0
21 Feb 2025
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers
Yunshan Zhong
Yuyao Zhou
Yuxin Zhang
Shen Li
Yong Li
Fei Chao
Zhanpeng Zeng
Rongrong Ji
MQ
94
0
0
31 Dec 2024
Exploring the Robustness and Transferability of Patch-Based Adversarial Attacks in Quantized Neural Networks
Amira Guesmi
B. Ouni
Muhammad Shafique
AAML
74
0
0
22 Nov 2024
M$^2$-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization
Yanbiao Liang
Huihong Shi
Zhongfeng Wang
MQ
21
0
0
10 Oct 2024
Low-Energy Line Codes for On-Chip Networks
Beyza Dabak
Major Glenn
Jingyang Liu
Alexander Buck
Siyi Yang
R. Calderbank
Natalie Enright Jerger
Daniel J. Sorin
19
0
0
23 May 2024
Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs
Jordan Dotzel
Yuzong Chen
Bahaa Kotb
Sushma Prasad
Gang Wu
Sheng R. Li
Mohamed S. Abdelfattah
Zhiru Zhang
26
8
0
06 May 2024
Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design
Jian Meng
Yuan Liao
Anupreetham Anupreetham
Ahmed Hassan
Shixing Yu
Han-Sok Suh
Xiaofeng Hu
Jae-sun Seo
MQ
49
1
0
02 May 2024
Quantized Feature Distillation for Network Quantization
Kevin Zhu
Yin He
Jianxin Wu
MQ
26
9
0
20 Jul 2023
InfLoR-SNN: Reducing Information Loss for Spiking Neural Networks
Yu-Zhu Guo
Y. Chen
Liwen Zhang
Xiaode Liu
Xinyi Tong
Yuanyuan Ou
Xuhui Huang
Zhe Ma
AAML
39
3
0
10 Jul 2023
Hard Sample Matters a Lot in Zero-Shot Quantization
Huantong Li
Xiangmiao Wu
Fanbing Lv
Daihai Liao
Thomas H. Li
Yonggang Zhang
Bo Han
Mingkui Tan
MQ
24
20
0
24 Mar 2023
Oscillation-free Quantization for Low-bit Vision Transformers
Shi Liu
Zechun Liu
Kwang-Ting Cheng
MQ
15
34
0
04 Feb 2023
BiBench: Benchmarking and Analyzing Network Binarization
Haotong Qin
Mingyuan Zhang
Yifu Ding
Aoyu Li
Zhongang Cai
Ziwei Liu
F. I. F. Richard Yu
Xianglong Liu
MQ
AAML
29
36
0
26 Jan 2023
ACQ: Improving Generative Data-free Quantization Via Attention Correction
Jixing Li
Xiaozhou Guo
Benzhe Dai
Guoliang Gong
Min Jin
Gang Chen
Wenyu Mao
Huaxiang Lu
MQ
30
4
0
18 Jan 2023
Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu
X. Chen
Chen-li Ma
Xue Liu
MQ
24
3
0
24 Dec 2022
Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
Yunshan Zhong
Gongrui Nan
Yu-xin Zhang
Fei Chao
Rongrong Ji
MQ
18
3
0
12 Nov 2022
AskewSGD : An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte
S. Schechtman
Eric Moulines
29
4
0
07 Nov 2022
Deep learning model compression using network sensitivity and gradients
M. Sakthi
N. Yadla
Raj Pawate
19
2
0
11 Oct 2022
Energy Efficient Hardware Acceleration of Neural Networks with Power-of-Two Quantisation
Dominika Przewlocka-Rus
T. Kryjak
MQ
8
5
0
30 Sep 2022
PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
Zhikai Li
Mengjuan Chen
Junrui Xiao
Qingyi Gu
ViT
MQ
43
33
0
13 Sep 2022
ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization
Cong Guo
Chen Zhang
Jingwen Leng
Zihan Liu
Fan Yang
Yun-Bo Liu
Minyi Guo
Yuhao Zhu
MQ
16
55
0
30 Aug 2022
Mixed-Precision Neural Networks: A Survey
M. Rakka
M. Fouda
Pramod P. Khargonekar
Fadi J. Kurdahi
MQ
18
11
0
11 Aug 2022
I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
Zhikai Li
Qingyi Gu
MQ
48
95
0
04 Jul 2022
OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization
Peng Hu
Xi Peng
Hongyuan Zhu
M. Aly
Jie Lin
MQ
39
59
0
23 May 2022
RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization
Hongyi Yao
Pu Li
Jian Cao
Xiangcheng Liu
Chenying Xie
Bin Wang
MQ
19
12
0
26 Apr 2022
SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
Xin Dong
B. D. Salvo
Meng Li
Chiao Liu
Zhongnan Qu
H. T. Kung
Ziyun Li
3DGS
21
20
0
10 Apr 2022
QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization
Xiuying Wei
Ruihao Gong
Yuhang Li
Xianglong Liu
F. Yu
MQ
VLM
19
166
0
11 Mar 2022
Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats
Brian Chmiel
Ron Banner
Elad Hoffer
Hilla Ben Yaacov
Daniel Soudry
MQ
25
22
0
19 Dec 2021
Sharpness-aware Quantization for Deep Neural Networks
Jing Liu
Jianfei Cai
Bohan Zhuang
MQ
27
24
0
24 Nov 2021
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration
Yifan Gong
Geng Yuan
Zheng Zhan
Wei Niu
Zhengang Li
...
Sijia Liu
Bin Ren
Xue Lin
Xulong Tang
Yanzhi Wang
20
10
0
22 Nov 2021
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization
Yunshan Zhong
Mingbao Lin
Gongrui Nan
Jianzhuang Liu
Baochang Zhang
Yonghong Tian
Rongrong Ji
MQ
40
71
0
17 Nov 2021
Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks
Moshe Eliasof
Ben Bodner
Eran Treister
GNN
32
7
0
10 Oct 2021
Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks
Cheng Gong
Ye Lu
Kunpeng Xie
Zongming Jin
Tao Li
Yanzhi Wang
MQ
25
7
0
08 Sep 2021
Quantized Convolutional Neural Networks Through the Lens of Partial Differential Equations
Ido Ben-Yair
Gil Ben Shalom
Moshe Eliasof
Eran Treister
MQ
16
5
0
31 Aug 2021
Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer
Phuoc Pham
J. Abraham
Jaeyong Chung
MQ
33
11
0
01 Apr 2021
Learnable Companding Quantization for Accurate Low-bit Neural Networks
Kohei Yamamoto
MQ
27
63
0
12 Mar 2021
Robustness and Transferability of Universal Attacks on Compressed Models
Alberto G. Matachana
Kenneth T. Co
Luis Muñoz-González
David Martínez
Emil C. Lupu
AAML
21
10
0
10 Dec 2020
Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search
Mingzhu Shen
Feng Liang
Ruihao Gong
Yuhang Li
Chuming Li
Chen Lin
F. Yu
Junjie Yan
Wanli Ouyang
MQ
25
36
0
09 Oct 2020
MSP: An FPGA-Specific Mixed-Scheme, Multi-Precision Deep Neural Network Quantization Framework
Sung-En Chang
Yanyu Li
Mengshu Sun
Weiwen Jiang
Runbin Shi
Xue Lin
Yanzhi Wang
MQ
19
7
0
16 Sep 2020
AQD: Towards Accurate Fully-Quantized Object Detection
Peng Chen
Jing Liu
Bohan Zhuang
Mingkui Tan
Chunhua Shen
MQ
23
10
0
14 Jul 2020
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou
Anbang Yao
Yiwen Guo
Lin Xu
Yurong Chen
MQ
316
1,047
0
10 Feb 2017