Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training

9 April 2020
Simon Wiedemann, Temesgen Mehari, Kevin Kepp, Wojciech Samek
arXiv: 2004.04729
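The listing gives only bibliographic data, but for orientation, the mechanism the title names — quantizing backpropagated signals with dithering so that rounding is unbiased and most small entries become exact zeros — can be sketched in a few lines. The sketch below is a generic illustration of non-subtractive dithered rounding, not the paper's exact algorithm; the function name dithered_quantize, the step size, and the toy gradient tensor are assumptions made for the example.

```python
import numpy as np

def dithered_quantize(grad, step, rng):
    # Non-subtractive dithering: add uniform noise one quantization
    # step wide, then round to the nearest multiple of `step`.
    # The noise makes the rounding unbiased in expectation, and
    # entries much smaller than the step frequently snap to exactly
    # zero, which is where the sparsity comes from.
    # (Illustrative sketch only; not the paper's exact scheme.)
    noise = rng.uniform(-step / 2, step / 2, size=grad.shape)
    return step * np.round((grad + noise) / step)

rng = np.random.default_rng(0)
g = rng.normal(scale=1e-3, size=(64, 64))  # stand-in for a backward-pass gradient
q = dithered_quantize(g, step=4e-3, rng=rng)
print(f"zeros: {(q == 0).mean():.0%}, mean |error|: {np.abs(q - g).mean():.2e}")
```

Because the added noise spans exactly one quantization step, the quantized gradient matches the original in expectation while most sub-step entries become zeros, giving a gradient tensor that is both low-precision and sparse.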

Papers citing "Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training"

11 citing papers
Less Memory Means smaller GPUs: Backpropagation with Compressed Activations
Daniel Barley, Holger Fröning
18 Sep 2024
Sparse is Enough in Fine-tuning Pre-trained Large Language Models
Weixi Song, Z. Li, Lefei Zhang, Hai Zhao, Bo Du
19 Dec 2023
Meta-Learning with a Geometry-Adaptive Preconditioner
Computer Vision and Pattern Recognition (CVPR), 2023
Suhyun Kang, Duhun Hwang, Moonjung Eo, Taesup Kim, Wonjong Rhee
04 Apr 2023
SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks
International Conference on Machine Learning (ICML), 2023
Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh
09 Feb 2023
AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Louis Leconte, S. Schechtman, Eric Moulines
07 Nov 2022
Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats
International Conference on Learning Representations (ICLR), 2021
Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, Daniel Soudry
19 Dec 2021
L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization
Neural Information Processing Systems (NeurIPS), 2021
Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan
27 Oct 2021
No frame left behind: Full Video Action Recognition
Computer Vision and Pattern Recognition (CVPR), 2021
X. Liu, S. Pintea, Fatemeh Karimi Nejadasl, Olaf Booij, Jan van Gemert
29 Mar 2021
Neural gradients are near-lognormal: improved quantized and sparse training
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry
15 Jun 2020
On-Device Machine Learning: An Algorithms and Learning Theory Perspective
Sauptik Dhar, Junyao Guo, Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
02 Nov 2019
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
IEEE Journal on Selected Topics in Signal Processing (JSTSP), 2019
Simon Wiedemann, H. Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marbán, ..., Ahmed Osman, D. Marpe, H. Schwarz, Thomas Wiegand, Wojciech Samek
27 Jul 2019