arXiv:2004.14765
Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
30 April 2020
Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
Papers citing "Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima" (7 papers):
SCoTTi: Save Computation at Training Time with an adaptive framework
Ziyu Li, Enzo Tartaglione, Van-Tam Nguyen (19 Dec 2023)
Towards Efficient Capsule Networks
Riccardo Renzulli, Marco Grangetto (19 Aug 2022) [OCL]
The rise of the lottery heroes: why zero-shot pruning is hard
Enzo Tartaglione (24 Feb 2022)
SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, A. Fiandrotti, Marco Grangetto (07 Feb 2021)
LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
Enzo Tartaglione, Andrea Bragagnolo, A. Fiandrotti, Marco Grangetto (16 Nov 2020) [ODL, UQCV]
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin (05 Mar 2020)
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016) [ODL]