arXiv: 2104.15023
Post-training deep neural network pruning via layer-wise calibration

30 April 2021
Ivan Lazarevich, Alexander Kozlov, Nikita Malinin
3DPC

Papers citing "Post-training deep neural network pruning via layer-wise calibration"

19 / 19 papers shown
EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language Models
  Yinan Liang, Z. Wang, Xiuwei Xu, Jie Zhou, Jiwen Lu
  VLM, LRM · 48 / 0 / 0 · 19 Mar 2025
PTSBench: A Comprehensive Post-Training Sparsity Benchmark Towards Algorithms and Models
  Zining Wang, J. Guo, Ruihao Gong, Yang Yong, Aishan Liu, Yushi Huang, Jiaheng Liu, X. Liu
  71 / 1 / 0 · 10 Dec 2024
STAT: Shrinking Transformers After Training
  Megan Flynn, Alexander Wang, Dean Edward Alvarez, Christopher De Sa, Anil Damle
  31 / 2 / 0 · 29 May 2024
Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes
  Ruihao Gong, Yang Yong, Zining Wang, Jinyang Guo, Xiuying Wei, Yuqing Ma, Xianglong Liu
  31 / 5 / 0 · 09 May 2024
TinySeg: Model Optimizing Framework for Image Segmentation on Tiny Embedded Systems
  Byungchul Chae, Jiae Kim, Seonyeong Heo
  VLM · 25 / 0 / 0 · 03 May 2024
Structurally Prune Anything: Any Architecture, Any Framework, Any Time
  Xun Wang, John Rachwan, Stephan Günnemann, Bertrand Charpentier
  33 / 4 / 0 · 03 Mar 2024
Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning
  Tuc Nguyen, Thai Le
  MoMe · 28 / 3 / 0 · 16 Feb 2024
Accelerating Learnt Video Codecs with Gradient Decay and Layer-wise Distillation
  Tianhao Peng, Ge Gao, Heming Sun, Fan Zhang, David Bull
  16 / 4 / 0 · 05 Dec 2023
ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models
  Yi-Lin Sung, Jaehong Yoon, Mohit Bansal
  VLM · 15 / 14 / 0 · 04 Oct 2023
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks
  Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min-man Wu, Xiaoli Li, Weisi Lin
  18 / 15 / 0 · 21 Aug 2023
Lossy and Lossless (L²) Post-training Model Size Compression
  Yumeng Shi, Shihao Bai, Xiuying Wei, Ruihao Gong, Jianlei Yang
  16 / 3 / 0 · 08 Aug 2023
Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models
  Seungcheol Park, Ho-Jin Choi, U. Kang
  VLM · 25 / 5 / 0 · 07 Aug 2023
Post-training Model Quantization Using GANs for Synthetic Data Generation
  Athanasios Masouris, Mansi Sharma, Adrian Boguszewski, Alexander Kozlov, Zhuo Wu, Raymond Lo
  MQ · 13 / 0 / 0 · 10 May 2023
Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning
  Danyang Liu, Xue Liu
  20 / 0 / 0 · 24 Dec 2022
QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
  Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller
  MQ · 21 / 6 / 0 · 05 Dec 2022
SVD-NAS: Coupling Low-Rank Approximation and Neural Architecture Search
  Zhewen Yu, C. Bouganis
  22 / 4 / 0 · 22 Aug 2022
A Fast Post-Training Pruning Framework for Transformers
  Woosuk Kwon, Sehoon Kim, Michael W. Mahoney, Joseph Hassoun, Kurt Keutzer, A. Gholami
  18 / 143 / 0 · 29 Mar 2022
Pre-training without Natural Images
  Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Y. Satoh
  79 / 116 / 0 · 21 Jan 2021
What is the State of Neural Network Pruning?
  Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
  185 / 1,027 / 0 · 06 Mar 2020