arXiv:2301.07966
Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions
19 January 2023
Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra
Papers citing "Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions" (7 papers)
| Title | Authors | Citations | Date |
|---|---|---|---|
| Tightening convex relaxations of trained neural networks: a unified approach for convex and S-shaped activations | Pablo Carrasco, Gonzalo Muñoz | 2 | 30 Oct 2024 |
| Optimization Over Trained Neural Networks: Taking a Relaxing Walk | Jiatai Tong, Junyang Cai, Thiago Serra | 6 | 07 Jan 2024 |
| Computational Tradeoffs of Optimization-Based Bound Tightening in ReLU Networks | Fabian Badilla, Marcos Goycoolea, Gonzalo Muñoz, Thiago Serra | 7 | 27 Dec 2023 |
| When Deep Learning Meets Polyhedral Theory: A Survey | Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay | 32 | 29 Apr 2023 |
| Pruning has a disparate impact on model accuracy | Cuong Tran, Ferdinando Fioretto, Jung-Eun Kim, Rakshit Naidu | 38 | 26 May 2022 |
| Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks | Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste | 684 | 31 Jan 2021 |
| What is the State of Neural Network Pruning? | Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag | 1,027 | 06 Mar 2020 |