Few Shot Network Compression via Cross Distillation

arXiv: 1911.09450
21 November 2019
Haoli Bai, Jiaxiang Wu, Irwin King, Michael Lyu
Tags: FedML

Papers citing "Few Shot Network Compression via Cross Distillation"

12 of 12 papers shown

Compressing Model with Few Class-Imbalance Samples: An Out-of-Distribution Expedition
Tian-Shuang Wu, Shen-Huan Lyu, Ning Chen, Zhihao Qu, Baoliu Ye
Tags: OODD
09 Feb 2025

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023

CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang, Yuming Chen, Zhaohui Zheng, Xiang Li, Ming-Ming Cheng, Qibin Hou
20 Jun 2023

Function-Consistent Feature Distillation
Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen
24 Apr 2023

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang
Tags: VLM
18 Oct 2022

MLink: Linking Black-Box Models from Multiple Domains for Collaborative Inference
Mu Yuan, Lan Zhang, Zimu Zheng, Yi-Nan Zhang, Xiang-Yang Li
28 Sep 2022

Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation
Xingning Dong, Tian Gan, Xuemeng Song, Jianlong Wu, Yuan-Chia Cheng, Liqiang Nie
18 Mar 2022

Towards Efficient Post-training Quantization of Pre-trained Language Models
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, M. Lyu
Tags: MQ
30 Sep 2021

BinaryBERT: Pushing the Limit of BERT Quantization
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
Tags: MQ
31 Dec 2020

Black-Box Ripper: Copying black-box models using generative evolutionary algorithms
Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu
Tags: MIACV, MLAU
21 Oct 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
Tags: VLM
09 Jun 2020

Neural Network Pruning with Residual-Connections and Limited-Data
Jian-Hao Luo, Jianxin Wu
19 Nov 2019