meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting
arXiv 1706.06197, 19 June 2017
Xu Sun, Xuancheng Ren, Shuming Ma, Houfeng Wang
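For context on the technique the citing papers below build on: meProp sparsifies back propagation by keeping only the k largest-magnitude components of a layer's output gradient, so the weight and input gradients are computed from a sparse vector. The following is a minimal PyTorch sketch of that top-k idea, assuming a plain linear layer; the function and class names are illustrative, not taken from the authors' released code.

```python
import torch

def topk_sparsify(grad_output, k):
    # Keep only the k largest-magnitude entries in each row of the
    # output gradient and zero the rest (the core meProp idea).
    _, indices = grad_output.abs().topk(k, dim=1)
    mask = torch.zeros_like(grad_output)
    mask.scatter_(1, indices, 1.0)
    return grad_output * mask

class MePropLinear(torch.autograd.Function):
    # Hypothetical wrapper: forward is a standard linear map,
    # backward uses the sparsified output gradient.
    @staticmethod
    def forward(ctx, x, W, k):
        ctx.save_for_backward(x, W)
        ctx.k = k
        return x @ W.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, W = ctx.saved_tensors
        sparse_grad = topk_sparsify(grad_out, ctx.k)
        grad_x = sparse_grad @ W       # dL/dx from the sparse gradient
        grad_W = sparse_grad.t() @ x   # dL/dW from the sparse gradient
        return grad_x, grad_W, None
```

With k equal to the layer width this reduces to ordinary backprop; the speedup and the regularization effect suggested by the title ("Reduced Overfitting") come from choosing k much smaller than the layer width.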
Papers citing "meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting" (50 of 71 papers shown)
- ssProp: Energy-Efficient Training for Convolutional Neural Networks with Scheduled Sparse Back Propagation (31 Dec 2024). Lujia Zhong, Shuo Huang, Yonggang Shi.
- Zeroth-Order Fine-Tuning of LLMs in Random Subspaces (11 Oct 2024). Ziming Yu, Pan Zhou, Sike Wang, Jia Li, Hua Huang.
- SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers (09 Oct 2024). V. Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev, Alexander Panchenko, Ivan Oseledets.
- OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized Distributions (06 Oct 2024). Yu-Shin Huang, Peter Just, Krishna Narayanan, Chao Tian.
- Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation (11 Sep 2024). Marcus Rüb, Axel Sikora, Daniel Mueller-Gritschneder.
- Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models (11 Apr 2024). Tanmay Gautam, Youngsuk Park, Hao Zhou, Parameswaran Raman, Wooseok Ha.
- Diversity-aware Channel Pruning for StyleGAN Compression (20 Mar 2024). Jiwoo Chung, Sangeek Hyun, Sang-Heon Shim, Jae-Pil Heo.
- Data Reconstruction Attacks and Defenses: A Systematic Evaluation [AAML, MIACV] (13 Feb 2024). Sheng Liu, Zihan Wang, Yuxiao Chen, Qi Lei.
- ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection (21 Dec 2023). Kai Huang, Boyuan Yang, Wei Gao.
- Straggler-resilient Federated Learning: Tackling Computation Heterogeneity with Layer-wise Partial Model Training in Mobile Edge Network [FedML] (16 Nov 2023). Hongda Wu, Ping Wang, Aswartha Narayana.
- Understanding Parameter Saliency via Extreme Value Theory [AAML, FAtt] (27 Oct 2023). Shuo Wang, Issei Sato.
- TinyProp -- Adaptive Sparse Backpropagation for Efficient TinyML On-device Learning (17 Aug 2023). Marcus Rüb, Daniel Maier, Daniel Mueller-Gritschneder, Axel Sikora.
- Efficient Online Processing with Deep Neural Networks (23 Jun 2023). Lukas Hedegaard.
- Fine-Tuning Language Models with Just Forward Passes (27 May 2023). Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alexandru Damian, Jason D. Lee, Danqi Chen, Sanjeev Arora.
- Towards Accurate Post-Training Quantization for Vision Transformer [MQ] (25 Mar 2023). Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, Xianglong Liu.
- A Comprehensive Survey of Dataset Distillation [DD] (13 Jan 2023). Shiye Lei, Dacheng Tao.
- Structured Pruning Adapters (17 Nov 2022). Lukas Hedegaard, Aman Alok, Juby Jose, Alexandros Iosifidis.
- Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training [MQ] (12 Nov 2022). Mingliang Xu, Gongrui Nan, Yuxin Zhang, Rongrong Ji.
- ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity [FedML] (04 Aug 2022). Xinchi Qiu, Javier Fernandez-Marques, Pedro Gusmão, Yan Gao, Titouan Parcollet, Nicholas D. Lane.
- Optimization with Access to Auxiliary Information [AAML] (01 Jun 2022). El Mahdi Chayti, Sai Praneeth Karimireddy.
- Aligned Weight Regularizers for Pruning Pretrained Neural Networks [VLM] (04 Apr 2022). James O'Neill, Sourav Dutta, Haytham Assem.
- Minimum Variance Unbiased N:M Sparsity for the Neural Gradients (21 Mar 2022). Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry.
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement (15 Mar 2022). Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek.
- L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization (27 Oct 2021). Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan.
- Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights (10 Sep 2021). Jang-Hyun Kim, Jayeon Yoo, Yeji Song, Kiyoon Yoo, Nojun Kwak.
- Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions (30 Aug 2021). Yang Wu, Dingheng Wang, Xiaotong Lu, Fan Yang, Guoqi Li, W. Dong, Jianbo Shi.
- Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability [FAtt, AAML] (03 Aug 2021). Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein.
- M-FAC: Efficient Matrix-Free Approximations of Second-Order Information (07 Jul 2021). Elias Frantar, Eldar Kurtic, Dan Alistarh.
- Masked Training of Neural Networks with Partial Gradients (16 Jun 2021). Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich.
- Toward Compact Deep Neural Networks via Energy-Aware Pruning [CVBM] (19 Mar 2021). Seul-Ki Yeom, Kyung-Hwan Shim, Jee-Hyun Hwang.
- Contextual Interference Reduction by Selective Fine-Tuning of Neural Networks [DRL] (21 Nov 2020). Mahdi Biparva, John K. Tsotsos.
- FPRaker: A Processing Element For Accelerating Neural Network Training (15 Oct 2020). Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos.
- TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference [MoE] (01 Sep 2020). Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Omar Mohamed Awad, Gennady Pekhimenko, Jorge Albericio, Andreas Moshovos.
- Randomized Automatic Differentiation [ODL] (20 Jul 2020). Deniz Oktay, N. McGreivy, Joshua Aduol, Alex Beatson, Ryan P. Adams.
- Neural gradients are near-lognormal: improved quantized and sparse training [MQ] (15 Jun 2020). Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry.
- Efficient Sparse-Dense Matrix-Matrix Multiplication on GPUs Using the Customized Sparse Storage Format (29 May 2020). Shaohuai Shi, Qiang Wang, Xiaowen Chu.
- Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training (09 Apr 2020). Simon Wiedemann, Temesgen Mehari, Kevin Kepp, Wojciech Samek.
- Communication-Efficient Distributed Deep Learning: A Comprehensive Survey (10 Mar 2020). Zhenheng Tang, Shaohuai Shi, Wei Wang, Bo Li, Xiaowen Chu.
- Sparse Weight Activation Training (07 Jan 2020). Md Aamir Raihan, Tor M. Aamodt.
- Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning [CVBM] (18 Dec 2019). Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek.
- SparseTrain: Leveraging Dynamic Sparsity in Training DNNs on General-Purpose SIMD Processors (22 Nov 2019). Zhangxiaowen Gong, Houxiang Ji, Christopher W. Fletcher, C. Hughes, Josep Torrellas.
- Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization [FedML] (30 Oct 2019). Farzin Haddadpour, Mohammad Mahdi Kamani, M. Mahdavi, V. Cadambe.
- Accelerating Training using Tensor Decomposition (10 Sep 2019). Mostafa Elhoushi, Ye Tian, Zihao Chen, F. Shafiq, Joey Yiwei Li.
- Automatic Compiler Based FPGA Accelerator for CNN Training (15 Aug 2019). S. Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvitadhi, A. Dasu, Yu Cao, Jae-sun Seo.
- Accelerated CNN Training Through Gradient Approximation (15 Aug 2019). Ziheng Wang, Sree Harsha Nelaturu.
- Federated Learning over Wireless Fading Channels (23 Jul 2019). Mohammad Mohammadi Amiri, Deniz Gunduz.
- Learning Sparse Networks Using Targeted Dropout (31 May 2019). Aidan Gomez, Ivan Zhang, Siddhartha Rao Kamalakara, Divyam Madaan, Kevin Swersky, Y. Gal, Geoffrey E. Hinton.
- Memorized Sparse Backpropagation (24 May 2019). Zhiyuan Zhang, Pengcheng Yang, Xuancheng Ren, Qi Su, Xu Sun.
- Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air (03 Jan 2019). Mohammad Mohammadi Amiri, Deniz Gunduz.
- Dynamic Sparse Graph for Efficient Deep Learning [GNN] (01 Oct 2018). L. Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie.