ResearchTrend.AI
Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli
arXiv:2006.05467 · 9 June 2020

Papers citing "Pruning neural networks without any data by iteratively conserving synaptic flow"

50 / 98 papers shown
• GreenFactory: Ensembling Zero-Cost Proxies to Estimate Performance of Neural Networks
  Gabriel Cortes, Nuno Lourenço, Paolo Romano, Penousal Machado · 14 May 2025 · UQCV, FedML
• Model Connectomes: A Generational Approach to Data-Efficient Language Models
  Klemen Kotar, Greta Tuckute · 29 Apr 2025
• RBFleX-NAS: Training-Free Neural Architecture Search Using Radial Basis Function Kernel and Hyperparameter Detection
  Tomomasa Yamasaki, Zhehui Wang, Tao Luo, Niangjun Chen, Bo Wang · 26 Mar 2025
• Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation
  P. Rumiantsev, Mark Coates · 27 Feb 2025
• NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance
  Raphael T. Husistein, Markus Reiher, Marco Eckhoff · 20 Feb 2025
• E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation
  Boqian Wu, Q. Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, D. Mocanu, M. V. Keulen, Elena Mocanu · 20 Feb 2025 · MedIm
• Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
  Chris Kolb, T. Weber, Bernd Bischl, David Rügamer · 04 Feb 2025
• Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning
  Andy Li, A. Durrant, Milan Markovic, Lu Yin, Georgios Leontidis, Tianlong Chen · 20 Nov 2024
• OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition
  Stephen Zhang, V. Papyan · 20 Sep 2024 · VLM
• NASH: Neural Architecture and Accelerator Search for Multiplication-Reduced Hybrid Models
  Yang Xu, Huihong Shi, Zhongfeng Wang · 07 Sep 2024
• Mask in the Mirror: Implicit Sparsification
  Tom Jacobs, R. Burkholz · 19 Aug 2024
• Network Fission Ensembles for Low-Cost Self-Ensembles
  Hojung Lee, Jong-Seok Lee · 05 Aug 2024 · UQCV
• Efficient Multi-Objective Neural Architecture Search via Pareto Dominance-based Novelty Search
  An Vo, Ngoc Hoang Luong · 30 Jul 2024
• A Generic Layer Pruning Method for Signal Modulation Recognition Deep Learning Models
  Yao Lu, Yutao Zhu, Yuqi Li, Dongwei Xu, Yun Lin, Qi Xuan, Xiaoniu Yang · 12 Jun 2024
• Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization
  Xiaolong Yu, Cong Tian · 30 May 2024
• Survival of the Fittest Representation: A Case Study with Modular Addition
  Xiaoman Delores Ding, Zifan Carl Guo, Eric J. Michaud, Ziming Liu, Max Tegmark · 27 May 2024
• Retrievable Domain-Sensitive Feature Memory for Multi-Domain Recommendation
  Yuang Zhao, Zhaocheng Du, Qinglin Jia, Linxuan Zhang, Zhenhua Dong, Ruiming Tang · 21 May 2024
• Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization
  Bailey J. Eccles, Leon Wong, Blesson Varghese · 22 Apr 2024
• Anytime Neural Architecture Search on Tabular Data
  Naili Xing, Shaofeng Cai, Zhaojing Luo, Bengchin Ooi, Jian Pei · 15 Mar 2024
• Robustifying and Boosting Training-Free Neural Architecture Search
  Zhenfeng He, Yao Shu, Zhongxiang Dai, K. H. Low · 12 Mar 2024
• NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models
  Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurélie C. Lozano, Payel Das, Georgios Kollias · 28 Feb 2024
• Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
  Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau · 12 Jan 2024
• PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
  Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta · 23 Dec 2023 · VLM
• Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks
  Xiaonan Liu, T. Ratnarajah, M. Sellathurai, Yonina C. Eldar · 04 Sep 2023
• An Evaluation of Zero-Cost Proxies -- from Neural Architecture Performance to Model Robustness
  Jovita Lukasik, Michael Moeller, M. Keuper · 18 Jul 2023
• Biologically-Motivated Learning Model for Instructed Visual Processing
  R. Abel, S. Ullman · 04 Jun 2023
• NTK-SAP: Improving neural network pruning by aligning training dynamics
  Yite Wang, Dawei Li, Ruoyu Sun · 06 Apr 2023
• Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
  Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie · 21 Mar 2023
• Automatic Attention Pruning: Improving and Automating Model Pruning using Attentions
  Kaiqi Zhao, Animesh Jain, Ming Zhao · 14 Mar 2023
• Efficient Transformer-based 3D Object Detection with Dynamic Token Halting
  Mao Ye, Gregory P. Meyer, Yuning Chai, Qiang Liu · 09 Mar 2023
• Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
  Shiwei Liu, Tianlong Chen, Zhenyu (Allen) Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang · 03 Mar 2023
• Balanced Training for Sparse GANs
  Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun · 28 Feb 2023
• Considering Layerwise Importance in the Lottery Ticket Hypothesis
  Benjamin Vandersmissen, José Oramas · 22 Feb 2023
• Synaptic Stripping: How Pruning Can Bring Dead Neurons Back To Life
  Tim Whitaker, L. D. Whitley · 11 Feb 2023 · CVBM
• Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation
  H. Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda · 27 Jan 2023 · MLT
• ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients
  Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, R. Marculescu · 26 Jan 2023
• Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions
  Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra · 19 Jan 2023 · MLT
• COLT: Cyclic Overlapping Lottery Tickets for Faster Pruning of Convolutional Neural Networks
  Md. Ismail Hossain, Mohammed Rakib, M. M. L. Elahi, Nabeel Mohammed, Shafin Rahman · 24 Dec 2022
• DAS: Neural Architecture Search via Distinguishing Activation Score
  Yuqiao Liu, Haipeng Li, Yanan Sun, Shuaicheng Liu · 23 Dec 2022
• Dynamic Sparse Network for Time Series Classification: Learning What to "see"
  Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, D. Mocanu · 19 Dec 2022 · AI4TS
• Can We Find Strong Lottery Tickets in Generative Models?
  Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo · 16 Dec 2022
• Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning
  Yingchun Wang, Song Guo, Jingcai Guo, Weizhan Zhang, Yi Tian Xu, Jiewei Zhang, Yi Liu · 07 Dec 2022
• Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
  Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding · 30 Nov 2022
• SNIPER Training: Single-Shot Sparse Training for Text-to-Speech
  Perry Lam, Huayun Zhang, Nancy F. Chen, Berrak Sisman, Dorien Herremans · 14 Nov 2022 · VLM
• Partial Binarization of Neural Networks for Budget-Aware Efficient Learning
  Udbhav Bamba, Neeraj Anand, Saksham Aggarwal, Dilip K Prasad, D. K. Gupta · 12 Nov 2022 · MQ
• Towards Theoretically Inspired Neural Initialization Optimization
  Yibo Yang, Hong Wang, Haobo Yuan, Zhouchen Lin · 12 Oct 2022
• Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?
  Mansheej Paul, F. Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, Gintare Karolina Dziugaite · 06 Oct 2022 · UQCV
• Siamese-NAS: Using Trained Samples Efficiently to Find Lightweight Neural Architecture by Prior Knowledge
  Yumeng Zhang, J. Hsieh, Chun-Chieh Lee, Kuo-Chin Fan · 02 Oct 2022
• Towards Sparsification of Graph Neural Networks
  Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang, O. Khan, Caiwen Ding · 11 Sep 2022 · GNN
• Complexity-Driven CNN Compression for Resource-constrained Edge AI
  Muhammad Zawish, Steven Davy, L. Abraham · 26 Aug 2022