ResearchTrend.AI
Deep Rewiring: Training very sparse deep networks
14 November 2017
G. Bellec, David Kappel, Wolfgang Maass, Robert Legenstein

Papers citing "Deep Rewiring: Training very sparse deep networks"

50 / 170 papers shown
• Cannistraci-Hebb Training on Ultra-Sparse Spiking Neural Networks. Yuan Hua, Jilin Zhang, Yingtao Zhang, Wenqi Gu, Leyi You, Baobo Xiong, C. Cannistraci, Hong Chen. 05 Nov 2025.
• Space as Time Through Neuron Position Learning. Balázs Mészáros, James C. Knight, Danyal Akarca, Thomas Nowotny. 03 Nov 2025.
• SpikeFit: Towards Optimal Deployment of Spiking Networks on Neuromorphic Hardware. Ivan Kartashov, M. Pushkareva, Iakov Karandashev. 17 Oct 2025.
• Neuro-inspired Ensemble-to-Ensemble Communication Primitives for Sparse and Efficient ANNs. Orestis Konstantaropoulos, S. Smirnakis, M. Papadopouli. 19 Aug 2025.
• ONG: One-Shot NMF-based Gradient Masking for Efficient Model Sparsification. S. Behera, Yamuna Prasad. 18 Aug 2025.
• A Topological Improvement of the Overall Performance of Sparse Evolutionary Training: Motif-Based Structural Optimization of Sparse MLPs Project. Xiaotian Chen, Hongyun Liu, Seyed Sahand Mohammadi Ziabari. 10 Jun 2025.
• Hyperpruning: Efficient Search through Pruned Variants of Recurrent Neural Networks Leveraging Lyapunov Spectrum. Caleb Zheng, Eli Shlizerman. 09 Jun 2025.
• NeuroTrails: Training with Dynamic Sparse Heads as the Key to Effective Ensembling. Bram Grooten, Farid Hasanov, Chenxiang Zhang, Q. Xiao, Boqian Wu, ..., Shiwei Liu, L. Yin, Elena Mocanu, Mykola Pechenizkiy, Decebal Constantin Mocanu. 23 May 2025.
• Balanced and Elastic End-to-end Training of Dynamic LLMs. Mohamed Wahib, Muhammed Abdullah Soyturk, Didem Unat. 20 May 2025.
• Efficient Unstructured Pruning of Mamba State-Space Models for Resource-Constrained Environments. Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma. 13 May 2025.
• Sparse-to-Sparse Training of Diffusion Models. Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva. 30 Apr 2025.
• The Neural Pruning Law Hypothesis. Eugen Barbulescu, Antonio Alexoaie, Lucian Busoniu. 06 Apr 2025.
• Symmetric Pruning of Large Language Models. Kai Yi, Peter Richtárik. 31 Jan 2025.
• Brain network science modelling of sparse neural networks enables Transformers and LLMs to perform as fully connected. Yingtao Zhang, Diego Cerretti, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, C. Cannistraci. 31 Jan 2025.
• Expanding Sparse Tuning for Low Memory Usage. Shufan Shen, Junshu Sun, Xiangyang Ji, Qingming Huang, Shuhui Wang. Neural Information Processing Systems (NeurIPS), 2024. 04 Nov 2024.
• Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates. Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas König, David Kappel, Anand Subramoney. 08 Oct 2024.
• Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness. Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, Decebal Constantin Mocanu, Elena Mocanu. International Conference on Learning Representations (ICLR), 2024. 03 Oct 2024.
• Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training. Pihe Hu, Shaolong Li, Zhuoran Li, L. Pan, Longbo Huang. Neural Information Processing Systems (NeurIPS), 2024. 28 Sep 2024.
• Nerva: a Truly Sparse Implementation of Neural Networks. Wieger Wesselink, Bram Grooten, Qiao Xiao, Cássio Machado de Campos, Mykola Pechenizkiy. 24 Jul 2024.
• Sparsest Models Elude Pruning: An Exposé of Pruning's Current Capabilities. Stephen Zhang, Vardan Papyan. 04 Jul 2024.
• Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity. Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, Decebal Constantin Mocanu, Matthew E. Taylor. Adaptive Agents and Multi-Agent Systems (AAMAS), 2024. 10 Jun 2024.
• Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning. Yaxin Li, Qi Xu, Jiangrong Shen, Hongming Xu, Long Chen, Gang Pan. 03 Jun 2024.
• Diverse Subset Selection via Norm-Based Sampling and Orthogonality. Noga Bar, Raja Giryes. 03 Jun 2024.
• Enhancing Adversarial Robustness in SNNs with Sparse Gradients. Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu. 30 May 2024.
• Sparse maximal update parameterization: A holistic approach to sparse training dynamics. Nolan Dey, Shane Bergsma, Joel Hestness. 24 May 2024.
• Neural Network Compression for Reinforcement Learning Tasks. Dmitry A. Ivanov, D. Larionov, Oleg V. Maslennikov, V. Voevodin. Scientific Reports (Sci Rep), 2024. 13 May 2024.
• Weight Sparsity Complements Activity Sparsity in Neuromorphic Language Models. Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, David Kappel, Anand Subramoney. 01 May 2024.
• Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World. Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick. 29 Mar 2024.
• LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels. Tuo Feng, Wenguan Wang, Fan Ma, Yi Yang. Computer Vision and Pattern Recognition (CVPR), 2024. 22 Mar 2024.
• LNPT: Label-free Network Pruning and Training. Jinying Xiao, Ping Li, Zhe Tang, Jie Nie. IEEE International Joint Conference on Neural Networks (IJCNN), 2024. 19 Mar 2024.
• Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood. Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin. 25 Feb 2024.
• Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers. Abhimanyu Bambhaniya, Amir Yazdanbakhsh, Suvinay Subramanian, Sheng-Chun Kao, Shivani Agrawal, Utku Evci, Tushar Krishna. 07 Feb 2024.
• EPSD: Early Pruning with Self-Distillation for Efficient Model Compression. Dong Chen, Ning Liu, Yichen Zhu, Zhengping Che, Rui Ma, Fachao Zhang, Xiaofeng Mou, Yi Chang, Jian Tang. 31 Jan 2024.
• ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks. Yang Sui, Miao Yin, Yu Gong, Jinqi Xiao, Huy Phan, Bo Yuan. 18 Jan 2024.
• Always-Sparse Training by Growing Connections with Guided Stochastic Exploration. Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau. 12 Jan 2024.
• CRAFT: Contextual Re-Activation of Filters for face recognition Training. Aman Bhatta, Domingo Mery, Haiyu Wu, Kevin W. Bowyer. IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2023. 29 Nov 2023.
• Towards Higher Ranks via Adversarial Weight Pruning. Yuchuan Tian, Hanting Chen, Tianyu Guo, Chao Xu, Yunhe Wang. Neural Information Processing Systems (NeurIPS), 2023. 29 Nov 2023.
• Neural Network Pruning by Gradient Descent. Zhang Zhang, Ruyi Tao, Jiang Zhang. 21 Nov 2023.
• Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference. Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney. 13 Nov 2023.
• In defense of parameter sharing for model-compression. Aditya Desai, Anshumali Shrivastava. International Conference on Learning Representations (ICLR), 2023. 17 Oct 2023.
• Every Parameter Matters: Ensuring the Convergence of Federated Learning with Dynamic Heterogeneous Models Reduction. Hanhan Zhou, Tian-Shing Lan, Guru Venkataramani, Wenbo Ding. Neural Information Processing Systems (NeurIPS), 2023. 12 Oct 2023.
• LEMON: Lossless model expansion. Yite Wang, Jiahao Su, Hanlin Lu, Cong Xie, Tianyi Liu, Jianbo Yuan, Yanghua Peng, Tian Ding, Hongxia Yang. International Conference on Learning Representations (ICLR), 2023. 12 Oct 2023.
• Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks. Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Ruibing Jin, Xiaoli Li, Weisi Lin. IEEE International Conference on Computer Vision (ICCV), 2023. 21 Aug 2023.
• A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. Zhenyi Wang, Enneng Yang, Li Shen, Heng-Chiao Huang. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023. 16 Jul 2023.
• Systematic Investigation of Sparse Perturbed Sharpness-Aware Minimization Optimizer. Peng Mi, Li Shen, Tianhe Ren, Weihao Ye, Tianshuo Xu, Xiaoshuai Sun, Tongliang Liu, Rongrong Ji, Dacheng Tao. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023. 30 Jun 2023.
• Maintaining Plasticity in Deep Continual Learning. Shibhansh Dohare, J. F. Hernandez-Garcia, Parash Rahman, A. Rupam Mahmood, Richard S. Sutton. 23 Jun 2023.
• Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training. A. Nowak, Bram Grooten, Decebal Constantin Mocanu, Jacek Tabor. Neural Information Processing Systems (NeurIPS), 2023. 21 Jun 2023.
• Magnitude Attention-based Dynamic Pruning. Jihye Back, Namhyuk Ahn, Jang-Hyun Kim. Expert Systems with Applications (ESWA), 2023. 08 Jun 2023.
• Towards Memory-Efficient Training for Extremely Large Output Spaces -- Learning with 500k Labels on a Single Commodity GPU. Erik Schultheis, Rohit Babbar. 06 Jun 2023.
• Dynamic Sparsity Is Channel-Level Sparsity Learner. Lu Yin, Gen Li, Meng Fang, Lijuan Shen, Tianjin Huang, Zinan Lin, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu. Neural Information Processing Systems (NeurIPS), 2023. 30 May 2023.