arXiv: 2306.12230
Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training
21 June 2023
A. Nowak, Bram Grooten, D. Mocanu, Jacek Tabor
Papers citing "Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training"
8 / 8 papers shown

| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Sparse-to-Sparse Training of Diffusion Models | Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva | DiffM | 0 | 30 Apr 2025 |
| Navigating Extremes: Dynamic Sparsity in Large Output Spaces | Nasib Ullah, Erik Schultheis, Mike Lasby, Yani Andrew Ioannou, Rohit Babbar | | 0 | 05 Nov 2024 |
| RelChaNet: Neural Network Feature Selection using Relative Change Scores | Felix Zimmer | | 0 | 03 Oct 2024 |
| Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness | Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, D. Mocanu, Elena Mocanu | OOD, 3DH | 0 | 03 Oct 2024 |
| Truly Sparse Neural Networks at Scale | Selima Curci, D. Mocanu, Mykola Pechenizkiy | | 19 | 02 Feb 2021 |
| Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks | Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste | MQ | 684 | 31 Jan 2021 |
| What is the State of Neural Network Pruning? | Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag | | 1,027 | 06 Mar 2020 |
| ImageNet Large Scale Visual Recognition Challenge | Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei | VLM, ObjD | 39,194 | 01 Sep 2014 |