Deep Rewiring: Training very sparse deep networks
14 November 2017
G. Bellec, David Kappel, Wolfgang Maass, R. Legenstein
BDL

Papers citing "Deep Rewiring: Training very sparse deep networks"

50 / 52 papers shown
Efficient Unstructured Pruning of Mamba State-Space Models for Resource-Constrained Environments
Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma
Mamba
13 May 2025

Sparse-to-Sparse Training of Diffusion Models
Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva
DiffM
30 Apr 2025

Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected
Yingtao Zhang, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, C. Cannistraci
31 Jan 2025

Symmetric Pruning of Large Language Models
Kai Yi, Peter Richtárik
AAML, VLM
31 Jan 2025

Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates
Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas König, David Kappel, Anand Subramoney
08 Oct 2024

Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, D. Mocanu, Elena Mocanu
OOD, 3DH
03 Oct 2024

Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity
Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, D. Mocanu, M. E. Taylor
10 Jun 2024
Neural Network Compression for Reinforcement Learning Tasks
Dmitry A. Ivanov, D. Larionov, Oleg V. Maslennikov, V. Voevodin
OffRL, AI4CE
13 May 2024

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick
UQCV
29 Mar 2024

Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau
12 Jan 2024

Magnitude Attention-based Dynamic Pruning
Jihye Back, Namhyuk Ahn, Jang-Hyun Kim
08 Jun 2023

Towards Memory-Efficient Training for Extremely Large Output Spaces -- Learning with 500k Labels on a Single Commodity GPU
Erik Schultheis, Rohit Babbar
06 Jun 2023

Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models
D. Honegger, Konstantin Schurholt, Damian Borth
26 Apr 2023

NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023

Balanced Training for Sparse GANs
Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun
28 Feb 2023
Learnable Heterogeneous Convolution: Learning both topology and strength
Rongzhen Zhao, Zhenzhi Wu, Qikun Zhang
13 Jan 2023

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick
09 Jan 2023

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding
30 Nov 2022

Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
Yunshan Zhong, Gongrui Nan, Yu-xin Zhang, Fei Chao, Rongrong Ji
MQ
12 Nov 2022

LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis
28 Oct 2022

Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training
Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger
25 Oct 2022

On the optimization and pruning for Bayesian deep learning
X. Ke, Yanan Fan
BDL, UQCV
24 Oct 2022

Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach
Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji, Dacheng Tao
AAML
11 Oct 2022

Spartan: Differentiable Sparsity via Regularized Transportation
Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim
27 May 2022
On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning
Hanhan Zhou, Tian-Shing Lan, Guru Venkataramani, Wenbo Ding
FedML
27 Jan 2022

Achieving Personalized Federated Learning with Sparse Local Models
Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, Dacheng Tao
FedML
27 Jan 2022

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, ..., Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin
26 Oct 2021

Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh
01 Oct 2021

Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
01 Sep 2021

A Method for Medical Data Analysis Using the LogNNet for Clinical Decision Support Systems and Edge Computing in Healthcare
Andrei Velichko
05 Aug 2021

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware
Philipp Plank, A. Rao, Andreas Wild, Wolfgang Maass
08 Jul 2021

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
OOD
28 Jun 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
19 Jun 2021
Effective Sparsification of Neural Networks with Global Sparsity Constraint
Xiao Zhou, Weizhong Zhang, Hang Xu, Tong Zhang
03 May 2021

Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang, Mingbao Lin, Yan Wang, Fei Chao, Rongrong Ji
18 Apr 2021

Recent Advances on Neural Network Pruning at Initialization
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
CVBM
11 Mar 2021

Sparse Training Theory for Scalable and Efficient Agents
D. Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale
02 Mar 2021

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
16 Feb 2021

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
08 Feb 2021

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders
Zahra Atashgahi, Ghada Sokar, T. Lee, Elena Mocanu, D. Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
01 Dec 2020

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
20 Nov 2020
Dynamic Hard Pruning of Neural Networks at the Edge of the Internet
Lorenzo Valerio, F. M. Nardini, A. Passarella, R. Perego
17 Nov 2020

Are wider nets better given the same number of parameters?
A. Golubeva, Behnam Neyshabur, Guy Gur-Ari
27 Oct 2020

Brain-Inspired Learning on Neuromorphic Substrates
Friedemann Zenke, Emre Neftci
22 Oct 2020

Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania
16 Jun 2020

An Overview of Neural Network Compression
James O'Neill
AI4CE
05 Jun 2020

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So
14 May 2020

Structural plasticity on an accelerated analog neuromorphic hardware system
Sebastian Billaudelle, Benjamin Cramer, Mihai A. Petrovici, Korbinian Schreiber, David Kappel, Johannes Schemmel, K. Meier
27 Dec 2019

Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection
Flavio Martinelli, Giorgia Dellaferrera, Pablo Mainar, Milos Cernak
22 Oct 2019

Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers, Luke Zettlemoyer
10 Jul 2019