SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks
11 October 2019 · arXiv:1910.05422
Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus

Papers citing "SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks" (7 papers)

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments
Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman · 06 Aug 2024 · VLM

Pruning has a disparate impact on model accuracy
Cuong Tran, Ferdinando Fioretto, Jung-Eun Kim, Rakshit Naidu · 26 May 2022

Revisiting Random Channel Pruning for Neural Network Compression
Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, Luc Van Gool · 11 May 2022

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong · 12 Oct 2021 · UQCV, MLT

Sparse Flows: Pruning Continuous-depth Models
Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus · 24 Jun 2021

A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness
James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, B. Kailkhura · 16 Jun 2021 · OOD

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K. Gifford, Daniela Rus · 04 Mar 2021 · AAML