Towards Efficient Model Compression via Learned Global Ranking
arXiv:1904.12368 · 28 April 2019
Ting-Wu Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu

Papers citing "Towards Efficient Model Compression via Learned Global Ranking" (19 papers)

1. Advancing Weight and Channel Sparsification with Enhanced Saliency (05 Feb 2025)
   Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez
2. Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution (13 Apr 2023)
   Z. Su, Jiehua Zhang, Tianpeng Liu, Zhen Liu, Shuanghui Zhang, M. Pietikäinen, Li Liu
3. Network Pruning via Feature Shift Minimization (06 Jul 2022)
   Y. Duan, Yue Zhou, Peng He, Qiang Liu, Shukai Duan, Xiaofang Hu
4. QADAM: Quantization-Aware DNN Accelerator Modeling for Pareto-Optimality (20 May 2022)
   A. Inci, Siri Garudanagiri Virupaksha, Aman Jain, Venkata Vivek Thallam, Ruizhou Ding, Diana Marculescu
5. QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators (17 May 2022)
   A. Inci, Siri Garudanagiri Virupaksha, Aman Jain, Venkata Vivek Thallam, Ruizhou Ding, Diana Marculescu
6. Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning (05 Mar 2022)
   Seunghyun Lee, B. Song
7. Pruning-aware Sparse Regularization for Network Pruning (18 Jan 2022)
   Nanfei Jiang, Xu Zhao, Chaoyang Zhao, Yongqi An, Ming Tang, Jinqiao Wang
8. GhostNets on Heterogeneous Devices via Cheap Operations (10 Jan 2022)
   Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian
9. Network Compression via Central Filter (10 Dec 2021)
   Y. Duan, Xiaofang Hu, Yue Zhou, Qiang Liu, Shukai Duan
10. Batch Normalization Tells You Which Filter is Important (02 Dec 2021)
    Junghun Oh, Heewon Kim, Sungyong Baik, Chee Hong, Kyoung Mu Lee
11. Improved Knowledge Distillation via Adversarial Collaboration (29 Nov 2021)
    Zhiqiang Liu, Chengkai Huang, Yanxia Liu
12. Class-Discriminative CNN Compression (21 Oct 2021)
    Yuchen Liu, D. Wentzlaff, S. Kung
13. SMOF: Squeezing More Out of Filters Yields Hardware-Friendly CNN Pruning (21 Oct 2021)
    Yanli Liu, Bochen Guan, Qinwen Xu, Weiyi Li, Shuxue Quan
14. RingCNN: Exploiting Algebraically-Sparse Ring Tensors for Energy-Efficient CNN-Based Computational Imaging (19 Apr 2021)
    Chao-Tsung Huang
15. DeepNVM++: Cross-Layer Modeling and Optimization Framework of Non-Volatile Memories for Deep Learning (08 Dec 2020)
    A. Inci, Mehmet Meric Isgenc, Diana Marculescu
16. Third ArchEdge Workshop: Exploring the Design Space of Efficient Deep Neural Networks (22 Nov 2020)
    Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen
17. Joint Pruning & Quantization for Extremely Sparse Neural Networks (05 Oct 2020)
    Po-Hsiang Yu, Sih-Sian Wu, Jan P. Klopp, Liang-Gee Chen, Shao-Yi Chien
18. PoPS: Policy Pruning and Shrinking for Deep Reinforcement Learning (14 Jan 2020)
    Dor Livne, Kobi Cohen
19. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications (09 Apr 2018)
    Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam