ResearchTrend.AI
Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM
23 March 2019

Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Z. Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, M. Fardad, X. Lin, Yongpan Liu, Yanzhi Wang
Papers citing "Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM"

4 papers shown
AlphaGAN: Fully Differentiable Architecture Search for Generative Adversarial Networks
Yuesong Tian, Li Shen, Guinan Su, Zhifeng Li, Wei Liu
16 Jun 2020
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks
Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan V. Oseledets, Zheng-Wei Zhang
29 Oct 2019
Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation
Xiaolong Ma, Geng Yuan, Sheng Lin, Caiwen Ding, Fuxun Yu, Tao Liu, Wujie Wen, Xiang Chen, Yanzhi Wang
27 Aug 2019
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
10 Feb 2017