arXiv: 2001.03253
Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
Noah Gamboa, Kais Kudrolli, Anand Dhoot, A. Pedram
9 January 2020
Papers citing "Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators" (3 papers shown):

- "Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments" — Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman. [VLM] 06 Aug 2024.
- "Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy" — Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K. Gifford, Daniela Rus. [AAML] 04 Mar 2021.
- "FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training" — Sangkug Lym, M. Erez. 27 Apr 2020.