ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Quantized Sparse Weight Decomposition for Neural Network Compression (arXiv:2207.11048)

22 July 2022
Andrey Kuzmin
M. V. Baalen
Markus Nagel
Arash Behboodi

Papers citing "Quantized Sparse Weight Decomposition for Neural Network Compression"

2 citing papers shown.
Lossy and Lossless (L$^2$) Post-training Model Size Compression
Yumeng Shi
Shihao Bai
Xiuying Wei
Ruihao Gong
Jianlei Yang
08 Aug 2023
Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training
Xinwei Ou
Zhangxin Chen
Ce Zhu
Yipeng Liu
22 Mar 2023