arXiv 2205.08099
Cited By
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
17 May 2022
Paul Wimmer, Jens Mehnert, A. P. Condurache
Papers citing "Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey" (22 of 22 papers shown):
1. PROM: Prioritize Reduction of Multiplications Over Lower Bit-Widths for Efficient CNNs [MQ]
   Lukas Meiner, Jens Mehnert, A. P. Condurache (06 May 2025)

2. Towards Symmetric Low-Rank Adapters
   Tales Panoutsos, Rodrygo L. T. Santos, Flavio Figueiredo (29 Mar 2025)

3. Learning effective pruning at initialization from iterative pruning
   Shengkai Liu, Yaofeng Cheng, Fusheng Zha, Wei Guo, Lining Sun, Zhenshan Bing, Chenguang Yang (27 Aug 2024)

4. Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method
   Jinghui Yuan, Weijin Jiang, Zhe Cao, Fangyuan Xie, Rong Wang, Feiping Nie, Yuan Yuan (06 Aug 2024)

5. AB-Training: A Communication-Efficient Approach for Distributed Low-Rank Learning
   D. Coquelin, Katherina Flügel, Marie Weiel, Nicholas Kiefer, Muhammed Öz, Charlotte Debus, Achim Streit, Markus Goetz (02 May 2024)

6. LUM-ViT: Learnable Under-sampling Mask Vision Transformer for Bandwidth Limited Optical Signal Acquisition [ViT]
   Lingfeng Liu, Dong Ni, Hangjie Yuan (03 Mar 2024)

7. Transferability of Winning Lottery Tickets in Neural Network Differential Equation Solvers
   Edward Prideaux-Ghee (16 Jun 2023)

8. Structured Pruning for Deep Convolutional Neural Networks: A survey [3DPC]
   Yang He, Lingao Xiao (01 Mar 2023)

9. One-shot Network Pruning at Initialization with Discriminative Image Patches [VLM]
   Yinan Yang, Yu Wang, Yi Ji, Heng Qi, Jien Kato (13 Sep 2022)

10. Signing the Supermask: Keep, Hide, Invert
    Nils Koster, O. Grothe, Achim Rettinger (31 Jan 2022)

11. Powerpropagation: A sparsity inducing weight reparameterisation
    Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh (01 Oct 2021)

12. Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
    Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry (16 Feb 2021)

13. A Unified Paths Perspective for Pruning at Initialization
    Thomas Gebhart, Udit Saxena, Paul Schrater (26 Jan 2021)

14. The Lottery Ticket Hypothesis for Object Recognition
    Sharath Girish, Shishira R. Maiya, Kamal Gupta, Hao Chen, L. Davis, Abhinav Shrivastava (08 Dec 2020)

15. The Lottery Ticket Hypothesis for Pre-trained BERT Networks
    Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin (23 Jul 2020)

16. Meta Pseudo Labels [VLM]
    Hieu H. Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le (23 Mar 2020)

17. What is the State of Neural Network Pruning?
    Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag (06 Mar 2020)

18. Comparing Rewinding and Fine-tuning in Neural Network Pruning
    Alex Renda, Jonathan Frankle, Michael Carbin (05 Mar 2020)

19. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
    Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington (14 Jun 2018)

20. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights [MQ]
    Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen (10 Feb 2017)

21. Norm-Based Capacity Control in Neural Networks
    Behnam Neyshabur, Ryota Tomioka, Nathan Srebro (27 Feb 2015)

22. Improving neural networks by preventing co-adaptation of feature detectors [VLM]
    Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov (03 Jul 2012)