Self-Supervised Quantization-Aware Knowledge Distillation
arXiv:2403.11106
17 March 2024
Kaiqi Zhao, Ming Zhao
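For context on the titular technique: quantization-aware knowledge distillation trains a low-bit student whose weights pass through a fake-quantization step, while a distillation loss pulls the student's outputs toward a full-precision teacher's. The sketch below is a minimal generic illustration, not the paper's implementation; the symmetric uniform quantizer, temperature value, and KL-based loss are all assumptions for demonstration:

```python
import numpy as np

def fake_quantize(w, bits=4):
    # Symmetric uniform quantizer: snap weights to 2^(bits-1)-1 levels per sign.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        scale = 1.0
    return np.round(w / scale) * scale

def softmax(logits, T=1.0):
    # Temperature-softened softmax, numerically stabilized.
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_kl(teacher_logits, student_logits, T=4.0):
    # KL(teacher || student) between softened output distributions.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

During training, the student's forward pass would use `fake_quantize(w)` in place of `w` (with a straight-through estimator for gradients), and `distillation_kl` would be minimized alongside the task loss.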
Papers citing "Self-Supervised Quantization-Aware Knowledge Distillation" (4 / 4 papers shown)
1. Scaling Up On-Device LLMs via Active-Weight Swapping Between DRAM and Flash
   Fucheng Jia, Zewen Wu, Shiqi Jiang, Huiqiang Jiang, Qianxi Zhang, Y. Yang, Yunxin Liu, Ju Ren, Deyu Zhang, Ting Cao
   11 Apr 2025

2. Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
   Cuong Pham, Tuan Hoang, Thanh-Toan Do
   27 Oct 2022

3. MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
   Yuhang Li, Mingzhu Shen, Jian Ma, Yan Ren, Mingxin Zhao, Qi Zhang, Ruihao Gong, F. Yu, Junjie Yan
   05 Nov 2021

4. Bag of Tricks for Image Classification with Convolutional Neural Networks
   Tong He, Zhi-Li Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li
   04 Dec 2018