ResearchTrend.AI
Transform Quantization for CNN (Convolutional Neural Network) Compression (arXiv:2009.01174)

2 September 2020 · Sean I. Young, Wang Zhe, David S. Taubman, B. Girod · MQ

Papers citing "Transform Quantization for CNN (Convolutional Neural Network) Compression"

9 / 9 papers shown
1. Radio: Rate-Distortion Optimization for Large Language Model Compression
   Sean I. Young · MQ
   21 · 0 · 0 · 05 May 2025

2. Foundations of Large Language Model Compression -- Part 1: Weight Quantization
   Sean I. Young · MQ
   37 · 1 · 0 · 03 Sep 2024

3. Matrix Compression via Randomized Low Rank and Low Precision Factorization
   R. Saha, Varun Srivastava, Mert Pilanci
   18 · 19 · 0 · 17 Oct 2023

4. LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
   Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
   26 · 11 · 0 · 06 Apr 2022

5. Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms
   R. Saha, Mert Pilanci, Andrea J. Goldsmith · MQ
   17 · 3 · 0 · 23 Feb 2022

6. Illumination and Temperature-Aware Multispectral Networks for Edge-Computing-Enabled Pedestrian Detection
   Yifan Zhuang, Ziyuan Pu, Jia Hu, Yinhai Wang
   12 · 24 · 0 · 09 Dec 2021

7. Towards Efficient Post-training Quantization of Pre-trained Language Models
   Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, M. Lyu · MQ
   71 · 47 · 0 · 30 Sep 2021

8. An Information-Theoretic Justification for Model Pruning
   Berivan Isik, Tsachy Weissman, Albert No
   84 · 35 · 0 · 16 Feb 2021

9. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
   Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · MQ
   311 · 1,047 · 0 · 10 Feb 2017