ResearchTrend.AI

Deep Neural Network Compression with Single and Multiple Level Quantization
arXiv:1803.03289 · 6 March 2018
Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, H. Xiong
MQ
Papers citing "Deep Neural Network Compression with Single and Multiple Level Quantization"

10 / 10 papers shown
On GNN explanability with activation rules
Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, C. Robardet
AI4CE · 21 · 2 · 0 · 17 Jun 2024

Automated Heterogeneous Low-Bit Quantization of Multi-Model Deep Learning Inference Pipeline
Jayeeta Mondal, Swarnava Dey, Arijit Mukherjee
MQ · 13 · 1 · 0 · 10 Nov 2023

On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen
33 · 1 · 0 · 13 Mar 2023

LAB: Learnable Activation Binarizer for Binary Neural Networks
Sieger Falkena, Hadi Jamali Rad, J. C. V. Gemert
MQ · 24 · 3 · 0 · 25 Oct 2022

Croesus: Multi-Stage Processing and Transactions for Video-Analytics in Edge-Cloud Systems
Samaa Gazzaz, Vishal Chakraborty, Faisal Nawab
20 · 10 · 0 · 31 Dec 2021

Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization
Bo-Shiuan Chu, Che-Rung Lee
13 · 11 · 0 · 07 Dec 2021

CHIP: CHannel Independence-based Pruning for Compact Neural Networks
Yang Sui, Miao Yin, Yi Xie, Huy Phan, S. Zonouz, Bo Yuan
VLM · 14 · 127 · 0 · 26 Oct 2021

Learnable Companding Quantization for Accurate Low-bit Neural Networks
Kohei Yamamoto
MQ · 20 · 63 · 0 · 12 Mar 2021

Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tian-Hao Li, Peng Hu, Jiazhen Lin, F. Yu, Junjie Yan
MQ · 19 · 445 · 0 · 14 Aug 2019

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 311 · 1,047 · 0 · 10 Feb 2017