Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa, Xin Wang
15 February 2019 · arXiv:1902.05967

Papers citing "Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization"

15 / 65 papers shown
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
08 Feb 2021

AttentionLite: Towards Efficient Self-Attention Models for Vision
Souvik Kundu, Sairam Sundaresan
21 Dec 2020

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders
Zahra Atashgahi, Ghada Sokar, T. Lee, Elena Mocanu, D. Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
01 Dec 2020

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
20 Nov 2020

FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
15 Oct 2020

Training highly effective connectivities within neural networks with randomly initialized, fixed weights
Cristian Ivan, Razvan V. Florian
30 Jun 2020

Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania
16 Jun 2020

An Overview of Neural Network Compression
James O'Neill
05 Jun 2020

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So
14 May 2020

Sparse Weight Activation Training
Md Aamir Raihan, Tor M. Aamodt
07 Jan 2020

Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers, Luke Zettlemoyer
10 Jul 2019

On improving deep learning generalization with adaptive sparse connectivity
Shiwei Liu, D. Mocanu, Mykola Pechenizkiy
27 Jun 2019

Intrinsically Sparse Long Short-Term Memory Networks
Shiwei Liu, D. Mocanu, Mykola Pechenizkiy
26 Jan 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014