ResearchTrend.AI
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression

29 April 2020
Sidak Pal Singh, Dan Alistarh
Links: arXiv (abs) · PDF · HTML · GitHub (51★)

Papers citing "WoodFisher: Efficient Second-Order Approximation for Neural Network Compression"

18 / 18 papers shown
Is Oracle Pruning the True Oracle?
Sicheng Feng, Keda Tao, Haoyu Wang
VLM · 28 Nov 2024

ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models
International Conference on Learning Representations (ICLR), 2023
Yi-Lin Sung, Jaehong Yoon, Mohit Bansal
VLM · 04 Oct 2023

Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
Huan Wang, Can Qin, Yue Bai, Yun Fu
12 Jan 2023

Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
Neural Information Processing Systems (NeurIPS), 2022
M. Tukan, Loay Mualem, Alaa Maalouf
3DPC · 18 Sep 2022

Trainability Preserving Neural Pruning
International Conference on Learning Representations (ICLR), 2022
Huan Wang, Yun Fu
AAML · 25 Jul 2022

Dual Lottery Ticket Hypothesis
International Conference on Learning Representations (ICLR), 2022
Yue Bai, Haiquan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu
08 Mar 2022

Cyclical Pruning for Sparse Neural Networks
Suraj Srinivas, Andrey Kuzmin, Markus Nagel, M. V. Baalen, Andrii Skliar, Tijmen Blankevoort
02 Feb 2022

UWC: Unit-wise Calibration Towards Rapid Network Compression
British Machine Vision Conference (BMVC), 2022
Chen Lin, Zheyang Li, Bo Peng, Haoji Hu, Wenming Tan, Ye Ren, Shiliang Pu
MQ · 17 Jan 2022

Deep Neural Compression Via Concurrent Pruning and Self-Distillation
J. Ó. Neill, Sourav Dutta, H. Assem
VLM · 30 Sep 2021

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
Neural Information Processing Systems (NeurIPS), 2021
Lucas Liebenwein, Alaa Maalouf, O. Gal, Dan Feldman, Daniela Rus
23 Jul 2021

SSSE: Efficiently Erasing Samples from Trained Machine Learning Models
Alexandra Peste, Dan Alistarh, Christoph H. Lampert
MU · 08 Jul 2021

SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization
Soroosh Shahtalebi, Jean-Christophe Gagnon-Audet, Touraj Laleh, Mojtaba Faramarzi, Kartik Ahuja, Irina Rish
04 Jun 2021

Dynamical Isometry: The Missing Ingredient for Neural Network Pruning
Huan Wang, Can Qin, Yue Bai, Y. Fu
12 May 2021

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Conference on Machine Learning and Systems (MLSys), 2021
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus
AAML · 04 Mar 2021

Neural Network Compression for Noisy Storage Devices
ACM Transactions on Embedded Computing Systems (TECS), 2021
Berivan Isik, Kristy Choi, Xin-Yang Zheng, Tsachy Weissman, Stefano Ermon, H. P. Wong, Armin Alaghi
15 Feb 2021

Neural Pruning via Growing Regularization
International Conference on Learning Representations (ICLR), 2020
Huan Wang, Can Qin, Yulun Zhang, Y. Fu
16 Dec 2020

Learning explanations that are hard to vary
International Conference on Learning Representations (ICLR), 2020
Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf
FAtt · 01 Sep 2020

Revisiting Loss Modelling for Unstructured Pruning
César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent
22 Jun 2020