Model compression as constrained optimization, with application to neural nets. Part II: quantization

13 July 2017
M. A. Carreira-Perpiñán, Yerlan Idelbayev
MQ

Papers citing "Model compression as constrained optimization, with application to neural nets. Part II: quantization"
8 of 8 papers shown

On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen
13 Mar 2023

Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu, X. Chen, Chen-li Ma, Xue Liu
MQ
24 Dec 2022

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte, S. Schechtman, Eric Moulines
07 Nov 2022

Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia
19 Nov 2021

And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou
MQ
12 Jul 2019

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks
Penghang Yin, Shuai Zhang, J. Lyu, Stanley Osher, Y. Qi, Jack Xin
MQ
15 Aug 2018

A Survey on Methods and Theories of Quantized Neural Networks
Yunhui Guo
MQ
13 Aug 2018

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights
Penghang Yin, Shuai Zhang, J. Lyu, Stanley Osher, Y. Qi, Jack Xin
MQ
19 Jan 2018