ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression

29 April 2020
Sidak Pal Singh
Dan Alistarh
ArXiv (abs) · PDF · HTML · GitHub (51★)
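The technique the title names can be sketched briefly: WoodFisher approximates the inverse of a damped empirical Fisher matrix, (1/N) Σₙ gₙgₙᵀ + λI, by folding in one gradient outer product at a time with the Woodbury (Sherman-Morrison) identity, so the full matrix is never inverted directly. Below is a minimal NumPy sketch under that reading; it is illustrative, not the authors' implementation, and the damping value and toy data are assumptions.

```python
import numpy as np

def woodfisher_inverse(grads, damp=1e-4):
    """Inverse of the damped empirical Fisher (1/N) sum_n g_n g_n^T + damp*I,
    built incrementally via rank-one Sherman-Morrison updates."""
    N, d = grads.shape
    F_inv = np.eye(d) / damp              # inverse of the damping term alone
    for g in grads:
        Fg = F_inv @ g
        # Sherman-Morrison: adding (1/N) g g^T to F updates the inverse exactly
        F_inv -= np.outer(Fg, Fg) / (N + g @ Fg)
    return F_inv

# Sanity check against direct inversion on toy data
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 5))           # 8 per-sample gradients, 5 parameters
F = G.T @ G / len(G) + 1e-4 * np.eye(5)
assert np.allclose(woodfisher_inverse(G), np.linalg.inv(F), atol=1e-6)
```

Since each update is exact, the result matches the direct inverse up to floating-point error, while the cost per step is O(d²) rather than the O(d³) of a fresh inversion.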

Papers citing "WoodFisher: Efficient Second-Order Approximation for Neural Network Compression"

18 papers shown
Is Oracle Pruning the True Oracle?
  Sicheng Feng, Keda Tao, Haoyu Wang · VLM · 28 Nov 2024

ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models
  Yi-Lin Sung, Jaehong Yoon, Mohit Bansal · VLM · 04 Oct 2023

Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
  Huan Wang, Can Qin, Yue Bai, Yun Fu · 12 Jan 2023

Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
  M. Tukan, Loay Mualem, Alaa Maalouf · 3DPC · 18 Sep 2022

Trainability Preserving Neural Pruning
  Huan Wang, Yun Fu · AAML · 25 Jul 2022

Dual Lottery Ticket Hypothesis
  Yue Bai, Haiquan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu · 08 Mar 2022

Cyclical Pruning for Sparse Neural Networks
  Suraj Srinivas, Andrey Kuzmin, Markus Nagel, M. V. Baalen, Andrii Skliar, Tijmen Blankevoort · 02 Feb 2022

UWC: Unit-wise Calibration Towards Rapid Network Compression
  Chen Lin, Zheyang Li, Bo Peng, Haoji Hu, Wenming Tan, Ye Ren, Shiliang Pu · MQ · 17 Jan 2022

Deep Neural Compression Via Concurrent Pruning and Self-Distillation
  J. Ó. Neill, Sourav Dutta, H. Assem · VLM · 30 Sep 2021

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
  Lucas Liebenwein, Alaa Maalouf, O. Gal, Dan Feldman, Daniela Rus · 23 Jul 2021

SSSE: Efficiently Erasing Samples from Trained Machine Learning Models
  Alexandra Peste, Dan Alistarh, Christoph H. Lampert · MU · 08 Jul 2021

SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization
  Soroosh Shahtalebi, Jean-Christophe Gagnon-Audet, Touraj Laleh, Mojtaba Faramarzi, Kartik Ahuja, Irina Rish · 04 Jun 2021

Dynamical Isometry: The Missing Ingredient for Neural Network Pruning
  Huan Wang, Can Qin, Yue Bai, Y. Fu · 12 May 2021

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
  Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus · AAML · 04 Mar 2021

Neural Network Compression for Noisy Storage Devices
  Berivan Isik, Kristy Choi, Xin-Yang Zheng, Tsachy Weissman, Stefano Ermon, H. P. Wong, Armin Alaghi · 15 Feb 2021

Neural Pruning via Growing Regularization (ICLR 2021)
  Huan Wang, Can Qin, Yulun Zhang, Y. Fu · 16 Dec 2020

Learning explanations that are hard to vary (ICLR 2021)
  Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf · FAtt · 01 Sep 2020

Revisiting Loss Modelling for Unstructured Pruning
  César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent · 22 Jun 2020