Title |
---|
Pruning neural networks without any data by iteratively conserving synaptic flow. Neural Information Processing Systems (NeurIPS), 2020 |
A Framework for Neural Network Pruning Using Gibbs Distributions. Global Communications Conference (GLOBECOM), 2021 |
Convolution-Weight-Distribution Assumption: Rethinking the Criteria of Channel Pruning. Neural Information Processing Systems (NeurIPS), 2021 |
Sparse Networks from Scratch: Faster Training without Losing Performance. Tim Dettmers, Luke Zettlemoyer. arXiv, 2019 |