ResearchTrend.AI

Training wide residual networks for deployment using a single bit for each weight

23 February 2018
Mark D. McDonnell
MQ

Papers citing "Training wide residual networks for deployment using a single bit for each weight"

31 / 31 papers shown
Development of Skip Connection in Deep Neural Networks for Computer Vision and Medical Image Analysis: A Survey
Guoping Xu
Xiaxia Wang
Xinglong Wu
Xuesong Leng
Yongchao Xu
3DPC
34
8
0
02 May 2024
Ada-QPacknet -- adaptive pruning with bit width reduction as an efficient continual learning method without forgetting
Marcin Pietroń
Dominik Zurek
Kamil Faber
Roberto Corizzo
CLL
29
2
0
14 Aug 2023
Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu
X. Chen
Chen-li Ma
Xue Liu
MQ
22
3
0
24 Dec 2022
AskewSGD : An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte
S. Schechtman
Eric Moulines
27
4
0
07 Nov 2022
LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
Gunho Park
Baeseong Park
Minsub Kim
Sungjae Lee
Jeonghoon Kim
Beomseok Kwon
S. Kwon
Byeongwook Kim
Youngjoo Lee
Dongsoo Lee
MQ
13
73
0
20 Jun 2022
UDC: Unified DNAS for Compressible TinyML Models
Igor Fedorov
Ramon Matas
Hokchhay Tann
Chu Zhou
Matthew Mattina
P. Whatmough
AI4CE
21
13
0
15 Jan 2022
Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun
Wenlong Zhao
Lijun Zhang
Xiao Liu
Hui Guan
Matei A. Zaharia
21
0
0
19 Nov 2021
Pruning Ternary Quantization
Danyang Liu
Xiangshan Chen
Jie Fu
Chen-li Ma
Xue Liu
MQ
31
0
0
23 Jul 2021
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz
Nandhini Chandramoorthy
Matthias Hein
Bernt Schiele
AAML
MQ
20
18
0
16 Apr 2021
Sparsity-Control Ternary Weight Networks
Xiang Deng
Zhongfei Zhang
MQ
12
8
0
01 Nov 2020
Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks
Thomas Bird
F. Kingma
David Barber
SyDa
MQ
AI4CE
18
9
0
26 Oct 2020
Binarized Neural Architecture Search for Efficient Object Recognition
Hanlin Chen
Lian Zhuo
Baochang Zhang
Xiawu Zheng
Jianzhuang Liu
Rongrong Ji
David Doermann
G. Guo
MQ
8
18
0
08 Sep 2020
Training with Quantization Noise for Extreme Model Compression
Angela Fan
Pierre Stock
Benjamin Graham
Edouard Grave
Rémi Gribonval
Hervé Jégou
Armand Joulin
MQ
16
243
0
15 Apr 2020
Improved Gradient based Adversarial Attacks for Quantized Networks
Kartik Gupta
Thalaiyasingam Ajanthan
MQ
8
19
0
30 Mar 2020
Iterative Averaging in the Quest for Best Test Error
Diego Granziol
Xingchen Wan
Samuel Albanie
Stephen J. Roberts
8
3
0
02 Mar 2020
Sparse Weight Activation Training
Md Aamir Raihan
Tor M. Aamodt
32
72
0
07 Jan 2020
Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks
Sébastien Henwood
François Leduc-Primeau
Yvon Savaria
18
10
0
23 Dec 2019
Binarized Neural Architecture Search
Hanlin Chen
Lian Zhuo
Baochang Zhang
Xiawu Zheng
Jianzhuang Liu
David Doermann
Rongrong Ji
MQ
18
25
0
25 Nov 2019
Circulant Binary Convolutional Networks: Enhancing the Performance of 1-bit DCNNs with Circulant Back Propagation
Chunlei Liu
Wenrui Ding
Xin Xia
Baochang Zhang
Jiaxin Gu
Jianzhuang Liu
Rongrong Ji
David Doermann
MQ
6
73
0
24 Oct 2019
Mirror Descent View for Neural Network Quantization
Thalaiyasingam Ajanthan
Kartik Gupta
Philip H. S. Torr
Richard I. Hartley
P. Dokania
MQ
14
23
0
18 Oct 2019
Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems
Mark D. McDonnell
Hesham Mostafa
Runchun Wang
André van Schaik
MQ
15
2
0
16 Jul 2019
And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock
Armand Joulin
Rémi Gribonval
Benjamin Graham
Hervé Jégou
MQ
29
149
0
12 Jul 2019
Modulated binary cliquenet
Jinpeng Xia
Jiasong Wu
Youyong Kong
Pinzheng Zhang
L. Senhadji
H. Shu
MQ
11
0
0
27 Feb 2019
Efficient Memory Management for GPU-based Deep Learning Systems
Junzhe Zhang
Sai-Ho Yeung
Yao Shu
Bingsheng He
Wei Wang
14
41
0
19 Feb 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa
Xin Wang
29
307
0
15 Feb 2019
Fast Adjustable Threshold For Uniform Neural Network Quantization (Winning solution of LPIRC-II)
A. Goncharenko
Andrey Denisov
S. Alyamkin
Evgeny Terentev
MQ
12
20
0
19 Dec 2018
Projection Convolutional Neural Networks for 1-bit CNNs via Discrete Back Propagation
Jiaxin Gu
Ce Li
Baochang Zhang
J. Han
Xianbin Cao
Jianzhuang Liu
David Doermann
3DV
168
86
0
30 Nov 2018
Probabilistic Binary Neural Networks
Jorn W. T. Peters
Max Welling
BDL
UQCV
MQ
17
50
0
10 Sep 2018
Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jungwook Choi
P. Chuang
Zhuo Wang
Swagath Venkataramani
Vijayalakshmi Srinivasan
K. Gopalakrishnan
MQ
11
75
0
17 Jul 2018
Retraining-Based Iterative Weight Quantization for Deep Neural Networks
Dongsoo Lee
Byeongwook Kim
MQ
28
16
0
29 May 2018
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou
Anbang Yao
Yiwen Guo
Lin Xu
Yurong Chen
MQ
316
1,047
0
10 Feb 2017