Position-based Scaled Gradient for Model Quantization and Pruning

22 May 2020 · arXiv: 2005.11035
Jangho Kim, Kiyoon Yoo, Nojun Kwak
MQ

Papers citing "Position-based Scaled Gradient for Model Quantization and Pruning"

5 papers shown

Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models
James O'Neill, Sourav Dutta · VLM, MQ · 12 Jul 2023

Prototype-based Personalized Pruning
Jang-Hyun Kim, Simyung Chang, Sungrack Yun, Nojun Kwak · 25 Mar 2021

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin · 05 Mar 2020

Feature Fusion for Online Mutual Knowledge Distillation
Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak · FedML · 19 Apr 2019

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL · 30 Nov 2014