Self-Supervised Quantization-Aware Knowledge Distillation

Kaiqi Zhao, Ming Zhao
17 March 2024 · arXiv:2403.11106
Communities: MQ

Papers citing "Self-Supervised Quantization-Aware Knowledge Distillation"

4 papers shown:
Scaling Up On-Device LLMs via Active-Weight Swapping Between DRAM and Flash
Fucheng Jia, Zewen Wu, Shiqi Jiang, Huiqiang Jiang, Qianxi Zhang, Y. Yang, Yunxin Liu, Ju Ren, Deyu Zhang, Ting Cao
11 Apr 2025
Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
Cuong Pham, Tuan Hoang, Thanh-Toan Do
Communities: FedML, MQ
27 Oct 2022
MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
Yuhang Li, Mingzhu Shen, Jian Ma, Yan Ren, Mingxin Zhao, Qi Zhang, Ruihao Gong, F. Yu, Junjie Yan
Communities: MQ
05 Nov 2021
Bag of Tricks for Image Classification with Convolutional Neural Networks
Tong He, Zhi-Li Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li
04 Dec 2018