Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
arXiv 2102.04010, 8 February 2021
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
Papers citing "Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch" (45 of 145 shown):
- Training Recipe for N:M Structured Sparsity with Decaying Pruning Mask. Sheng-Chun Kao, Amir Yazdanbakhsh, Suvinay Subramanian, Shivani Agrawal, Utku Evci, T. Krishna. 15 Sep 2022.
- Optimizing Connectivity through Network Gradients for Restricted Boltzmann Machines. A. C. N. D. Oliveira, Daniel R. Figueiredo. 14 Sep 2022.
- Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge. Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, Salman Avestimehr. [FedML] 27 Aug 2022.
- Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning. Elias Frantar, Sidak Pal Singh, Dan Alistarh. [MQ] 24 Aug 2022.
- An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers. Chao Fang, Aojun Zhou, Zhongfeng Wang. [MoE] 12 Aug 2022.
- CrAM: A Compression-Aware Minimizer. Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh. 28 Jul 2022.
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks. Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo P. Mandic. 18 Jul 2022.
- SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning. Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, Luis Ceze. 11 Jul 2022.
- DRESS: Dynamic REal-time Sparse Subnets. Zhongnan Qu, Syed Shakib Sarwar, Xin Dong, Yuecheng Li, Huseyin Ekin Sumbul, B. D. Salvo. [3DH] 01 Jul 2022.
- Compressing Pre-trained Transformers via Low-Bit NxM Sparsity for Natural Language Understanding. Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu. 30 Jun 2022.
- Winning the Lottery Ahead of Time: Efficient Early Network Pruning. John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann. 21 Jun 2022.
- Deep Neural Network Pruning for Nuclei Instance Segmentation in Hematoxylin & Eosin-Stained Histological Images. A. Mahbod, R. Entezari, Isabella Ellinger, O. Saukh. 15 Jun 2022.
- Learning Best Combination for Efficient N:M Sparsity. Yu-xin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji. 14 Jun 2022.
- Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey. Paul Wimmer, Jens Mehnert, A. P. Condurache. [DD] 17 May 2022.
- Attentive Fine-Grained Structured Sparsity for Image Restoration. Junghun Oh, Heewon Kim, Seungjun Nah, Chee Hong, Jonghyun Choi, Kyoung Mu Lee. 26 Apr 2022.
- LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification. Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava. 06 Apr 2022.
- SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution. Shwai He, Chenbo Jiang, Daize Dong, Liang Ding. 05 Apr 2022.
- Minimum Variance Unbiased N:M Sparsity for the Neural Gradients. Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry. 21 Mar 2022.
- Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen. 05 Mar 2022.
- Dynamic N:M Fine-grained Structured Sparse Attention Mechanism. Zhaodong Chen, Yuying Quan, Zheng Qu, L. Liu, Yufei Ding, Yuan Xie. 28 Feb 2022.
- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters. Mingbao Lin, Liujuan Cao, Yu-xin Zhang, Ling Shao, Chia-Wen Lin, Rongrong Ji. 15 Feb 2022.
- Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets. Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang. 09 Feb 2022.
- Accelerating DNN Training with Structured Data Gradient Pruning. Bradley McDanel, Helia Dinh, J. Magallanes. 01 Feb 2022.
- SPDY: Accurate Pruning with Speedup Guarantees. Elias Frantar, Dan Alistarh. 31 Jan 2022.
- OptG: Optimizing Gradient-driven Criteria in Network Sparsity. Yu-xin Zhang, Mingbao Lin, Mengzhao Chen, Fei Chao, Rongrong Ji. 30 Jan 2022.
- Sparse is Enough in Scaling Transformers. Sebastian Jaszczur, Aakanksha Chowdhery, Afroz Mohiuddin, Lukasz Kaiser, Wojciech Gajewski, Henryk Michalewski, Jonni Kanerva. [MoE] 24 Nov 2021.
- NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM. Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu. 28 Oct 2021.
- Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks. Moshe Eliasof, Ben Bodner, Eran Treister. [GNN] 10 Oct 2021.
- Neuro-Symbolic AI: An Emerging Class of AI Workloads and their Characterization. Zachary Susskind, Bryce Arden, L. John, Patrick A Stockton, E. John. [NAI] 13 Sep 2021.
- Towards Structured Dynamic Sparse Pre-Training of BERT. A. Dietrich, Frithjof Gressmann, Douglas Orr, Ivan Chelombiev, Daniel Justus, Carlo Luschi. 13 Aug 2021.
- Group Fisher Pruning for Practical Network Compression. Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jingliang Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Q. Liao, Wayne Zhang. 02 Aug 2021.
- AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh. [AI4CE] 23 Jun 2021.
- Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu. 19 Jun 2021.
- Dynamic Sparse Training for Deep Reinforcement Learning. Ghada Sokar, Elena Mocanu, D. Mocanu, Mykola Pechenizkiy, Peter Stone. 08 Jun 2021.
- 1xN Pattern for Pruning Convolutional Neural Networks. Mingbao Lin, Yu-xin Zhang, Yuchao Li, Bohong Chen, Fei Chao, Mengdi Wang, Shen Li, Yonghong Tian, Rongrong Ji. [3DPC] 31 May 2021.
- Search Spaces for Neural Model Training. Darko Stosic, Dusan Stosic. 27 May 2021.
- Dynamic Probabilistic Pruning: A General Framework for Hardware-Constrained Pruning at Different Granularities. L. Gonzalez-Carabarin, Iris A. M. Huijben, Bastian Veeling, A. Schmid, Ruud J. G. van Sloun. 26 May 2021.
- Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression. Baeseong Park, S. Kwon, Daehwan Oh, Byeongwook Kim, Dongsoo Lee. 05 May 2021.
- Accelerating Sparse Deep Neural Networks. Asit K. Mishra, J. Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius. 16 Apr 2021.
- Sparse Training Theory for Scalable and Efficient Agents. D. Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale. 02 Mar 2021.
- Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks. Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry. 16 Feb 2021.
- Comparing Rewinding and Fine-tuning in Neural Network Pruning. Alex Renda, Jonathan Frankle, Michael Carbin. 05 Mar 2020.
- Bag of Tricks for Image Classification with Convolutional Neural Networks. Tong He, Zhi-Li Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. 04 Dec 2018.
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam. [3DH] 17 Apr 2017.
- Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights. Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen. [MQ] 10 Feb 2017.