ResearchTrend.AI
Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths

2 March 2021
Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, A. Oliva, Rogerio Feris, Kate Saenko
MQ

Papers citing "Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths"

4 papers shown
MBQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization
Yunshan Zhong, Yuyao Zhou, Fei Chao, Rongrong Ji
MQ
14 May 2023
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020
Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018
Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML
09 Apr 2018