ResearchTrend.AI

Training wide residual networks for deployment using a single bit for each weight
Mark D. McDonnell
arXiv:1802.08530 · 23 February 2018
ArXiv (abs) · PDF · HTML · GitHub (36★)

Papers citing "Training wide residual networks for deployment using a single bit for each weight"

27 / 27 papers shown
Development of Skip Connection in Deep Neural Networks for Computer Vision and Medical Image Analysis: A Survey
Guoping Xu, Xiaxia Wang, Xinglong Wu, Xuesong Leng, Yongchao Xu
Engineering Applications of Artificial Intelligence (EAAI), 2024 · 02 May 2024
Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu, X. Chen, Chen Ma, Xue Liu
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022 · 24 Dec 2022
AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte, S. Schechtman, Eric Moulines
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022 · 07 Nov 2022
UDC: Unified DNAS for Compressible TinyML Models
Igor Fedorov, Ramon Matas, Hokchhay Tann, Chu Zhou, Matthew Mattina, P. Whatmough
Neural Information Processing Systems (NeurIPS), 2022 · 15 Jan 2022
Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia
19 Nov 2021
Pruning Ternary Quantization
Danyang Liu, Xiangshan Chen, Jie Fu, Chen Ma, Xue Liu
23 Jul 2021
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021 · 16 Apr 2021
Sparsity-Control Ternary Weight Networks
Xiang Deng, Zhongfei Zhang
Neural Networks (NN), 2020 · 01 Nov 2020
Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks
Thomas Bird, F. Kingma, David Barber
International Conference on Learning Representations (ICLR), 2020 · 26 Oct 2020
Binarized Neural Architecture Search for Efficient Object Recognition
Hanlin Chen, Lian Zhuo, Baochang Zhang, Xiawu Zheng, Jianzhuang Liu, Rongrong Ji, David Doermann, G. Guo
International Journal of Computer Vision (IJCV), 2020 · 08 Sep 2020
Training with Quantization Noise for Extreme Model Compression
Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Rémi Gribonval, Armand Joulin
International Conference on Learning Representations (ICLR), 2020 · 15 Apr 2020
Improved Gradient based Adversarial Attacks for Quantized Networks
Kartik Gupta, Thalaiyasingam Ajanthan
AAAI Conference on Artificial Intelligence (AAAI), 2020 · 30 Mar 2020
Iterative Averaging in the Quest for Best Test Error
Diego Granziol, Xingchen Wan, Samuel Albanie, Stephen J. Roberts
Journal of Machine Learning Research (JMLR), 2020 · 02 Mar 2020
Sparse Weight Activation Training
Md Aamir Raihan, Tor M. Aamodt
Neural Information Processing Systems (NeurIPS), 2020 · 07 Jan 2020
Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks
Sébastien Henwood, François Leduc-Primeau, Yvon Savaria
International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2019 · 23 Dec 2019
Binarized Neural Architecture Search
Hanlin Chen, Lian Zhuo, Baochang Zhang, Xiawu Zheng, Jianzhuang Liu, David Doermann, Rongrong Ji
AAAI Conference on Artificial Intelligence (AAAI), 2019 · 25 Nov 2019
Circulant Binary Convolutional Networks: Enhancing the Performance of 1-bit DCNNs with Circulant Back Propagation
Chunlei Liu, Wenrui Ding, Xin Xia, Baochang Zhang, Jiaxin Gu, Jianzhuang Liu, Rongrong Ji, David Doermann
Computer Vision and Pattern Recognition (CVPR), 2019 · 24 Oct 2019
Mirror Descent View for Neural Network Quantization
Thalaiyasingam Ajanthan, Kartik Gupta, Juil Sock, Leonid Sigal, P. Dokania
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019 · 18 Oct 2019
Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems
Mark D. McDonnell, Hesham Mostafa, Runchun Wang, André van Schaik
Asia-Pacific Conference on Intelligent Robot Systems (AIRS), 2019 · 16 Jul 2019
And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Edouard Grave
International Conference on Learning Representations (ICLR), 2019 · 12 Jul 2019
Modulated binary cliquenet
Jinpeng Xia, Jiasong Wu, Youyong Kong, Pinzheng Zhang, L. Senhadji, H. Shu
27 Feb 2019
Efficient Memory Management for GPU-based Deep Learning Systems
Junzhe Zhang, Sai-Ho Yeung, Yao Shu, Bingsheng He, Wei Wang
19 Feb 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa, Xin Wang
15 Feb 2019
Fast Adjustable Threshold For Uniform Neural Network Quantization (Winning solution of LPIRC-II)
A. Goncharenko, Andrey Denisov, S. Alyamkin, Evgeny Terentev
19 Dec 2018
Projection Convolutional Neural Networks for 1-bit CNNs via Discrete Back Propagation
Jiaxin Gu, Ce Li, Baochang Zhang, Jiawei Han, Xianbin Cao, Jianzhuang Liu, David Doermann
30 Nov 2018
Probabilistic Binary Neural Networks
Jorn W. T. Peters, Max Welling
10 Sep 2018
Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan
17 Jul 2018