Ternary Neural Networks with Fine-Grained Quantization

2 May 2017
Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey
Topics: MQ
arXiv: 1705.01462
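
The cited paper concerns ternary (three-level) weight networks with fine-grained, group-wise quantization. As a rough illustration only, the sketch below shows a common threshold-based ternarization scheme applied per group, with a per-group threshold and scaling factor; the group size, the threshold heuristic, and the helper name ternarize_groupwise are assumptions made for this sketch, not the paper's exact algorithm.

```python
# Minimal sketch of group-wise (fine-grained) ternary weight quantization.
# Assumes the common ternary formulation w_q = alpha * sign(w) for |w| > delta,
# else 0, computed independently per small group of weights. Illustrative only;
# not necessarily the procedure from the cited paper.
import numpy as np

def ternarize_groupwise(w: np.ndarray, group_size: int = 64, t: float = 0.7):
    """Quantize a 1-D weight vector to {-alpha, 0, +alpha} per group."""
    w = w.astype(np.float32)
    pad = (-len(w)) % group_size
    groups = np.pad(w, (0, pad)).reshape(-1, group_size)

    # Per-group threshold: a fraction t of the group's mean |w|
    # (a widely used heuristic for ternary networks).
    delta = t * np.mean(np.abs(groups), axis=1, keepdims=True)
    mask = np.abs(groups) > delta

    # Per-group scale alpha: mean |w| over the weights that survive the threshold.
    alpha = np.sum(np.abs(groups) * mask, axis=1, keepdims=True) / np.maximum(
        mask.sum(axis=1, keepdims=True), 1
    )

    q = alpha * np.sign(groups) * mask
    return q.reshape(-1)[: len(w)]

# Usage example: quantize random weights and report the reconstruction error.
w = np.random.randn(1000).astype(np.float32)
w_q = ternarize_groupwise(w, group_size=64)
print("quantization MSE:", np.mean((w - w_q) ** 2))
```

Finer groups (smaller group_size) give each scale factor less data to fit but track local weight statistics more closely, which is the basic accuracy-versus-overhead trade-off behind fine-grained quantization.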

Papers citing "Ternary Neural Networks with Fine-Grained Quantization"

25 / 25 papers shown

Title | Authors | Topics | Date
Quality Scalable Quantization Methodology for Deep Learning on Edge | S. Khaliq, Rehan Hafiz | MQ | 15 Jul 2024
Mixed-Precision Neural Networks: A Survey | M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi | MQ | 11 Aug 2022
CBP: Backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method | Guhyun Kim, D. Jeong | MQ | 06 Oct 2021
Pruning and Quantization for Deep Neural Network Acceleration: A Survey | Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang | MQ | 24 Jan 2021
Sparse Weight Activation Training | Md Aamir Raihan, Tor M. Aamodt | - | 07 Jan 2020
Towards Efficient Training for Neural Network Quantization | Qing Jin, Linjie Yang, Zhenyu A. Liao | MQ | 21 Dec 2019
Loss Aware Post-training Quantization | Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, A. Bronstein, A. Mendelson | MQ | 17 Nov 2019
TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks | Shubham Jain, S. Gupta, A. Raghunathan | MQ | 15 Sep 2019
Unrolling Ternary Neural Networks | Stephen Tridgell, M. Kumm, M. Hardieck, David Boland, Duncan J. M. Moss, P. Zipf, Philip H. W. Leong | - | 09 Sep 2019
GDRQ: Group-based Distribution Reshaping for Quantization | Haibao Yu, Tuopu Wen, Guangliang Cheng, Jiankai Sun, Qi Han, Jianping Shi | MQ | 05 Aug 2019
Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model | Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek V. Menon, Sun Choi, Kushal Datta, V. Saletore | MQ | 03 Jun 2019
Low-bit Quantization of Neural Networks for Efficient Inference | Yoni Choukroun, Eli Kravchik, Fan Yang, P. Kisilev | MQ | 18 Feb 2019
AutoQ: Automated Kernel-Wise Neural Network Quantization | Qian Lou, Feng Guo, Lantao Liu, Minje Kim, Lei Jiang | MQ | 15 Feb 2019
Efficient Hybrid Network Architectures for Extremely Quantized Neural Networks Enabling Intelligence at the Edge | I. Chakraborty, Deboleena Roy, Aayush Ankit, Kaushik Roy | GNN, MQ | 01 Feb 2019
A Survey on Methods and Theories of Quantized Neural Networks | Yunhui Guo | MQ | 13 Aug 2018
PACT: Parameterized Clipping Activation for Quantized Neural Networks | Jungwook Choi, Zhuo Wang, Swagath Venkataramani, P. Chuang, Vijayalakshmi Srinivasan, K. Gopalakrishnan | MQ | 16 May 2018
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling | Liyuan Liu, Xiang Ren, Jingbo Shang, Jian-wei Peng, Jiawei Han | - | 20 Apr 2018
Deep Neural Network Compression with Single and Multiple Level Quantization | Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, H. Xiong | MQ | 06 Mar 2018
Compressing Neural Networks using the Variational Information Bottleneck | Bin Dai, Chen Zhu, David Wipf | MLT | 28 Feb 2018
Loss-aware Weight Quantization of Deep Networks | Lu Hou, James T. Kwok | MQ | 23 Feb 2018
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference | Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko | MQ | 15 Dec 2017
WRPN: Wide Reduced-Precision Networks | Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, Debbie Marr | MQ | 04 Sep 2017
SEP-Nets: Small and Effective Pattern Networks | Zhe Li, Xiaoyu Wang, Xutao Lv, Tianbao Yang | - | 13 Jun 2017
Bayesian Compression for Deep Learning | Christos Louizos, Karen Ullrich, Max Welling | UQCV, BDL | 24 May 2017
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | MQ | 10 Feb 2017