Parameter-Efficient and Student-Friendly Knowledge Distillation

28 May 2022
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao

Papers citing "Parameter-Efficient and Student-Friendly Knowledge Distillation"

34 / 34 papers shown
DCSNet: A Lightweight Knowledge Distillation-Based Model with Explainable AI for Lung Cancer Diagnosis from Histopathological Images
Sadman Sakib Alif, Nasim Anzum Promise, Fiaz Al Abid, Aniqua Nusrat Zereen
14 May 2025

Vision Foundation Models in Medical Image Analysis: Advances and Challenges
Pengchen Liang, Bin Pu, Haishan Huang, Yiwei Li, H. Wang, Weibo Ma, Qing Chang
Tags: VLM, MedIm
24 Feb 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng
06 Jan 2025

Distilled Transformers with Locally Enhanced Global Representations for Face Forgery Detection
Yaning Zhang, Qiufu Li, Zitong Yu, L. Shen
Tags: ViT
31 Dec 2024

Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching
Wenqi Niu, Yingchao Wang, Guohui Cai, Hanpo Hou
09 Oct 2024

Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher
Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen
05 Oct 2024

Multiple-Exit Tuning: Towards Inference-Efficient Adaptation for Vision Transformer
Zheng Liu, Jinchao Zhu, Nannan Li, Gao Huang
21 Sep 2024

Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models
Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao, Min Zhang
19 Sep 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
Tags: DiffM
05 Sep 2024

Adaptive Modality Balanced Online Knowledge Distillation for Brain-Eye-Computer based Dim Object Detection
Zixing Li, Chao Yan, Zhen Lan, Xiaojia Xiang, Han Zhou, Jun Lai, Dengqing Tang
02 Jul 2024

Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification
Xiaoxiong Zhang, Zhiwei Zeng, Xin Zhou, Dusit Niyato, Zhiqi Shen
Tags: FedML
19 Jun 2024

Federated Distillation: A Survey
Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi, Dacheng Tao
Tags: DD, FedML
02 Apr 2024

iDAT: inverse Distillation Adapter-Tuning
Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Daize Dong, Suncheng Xiang, Ting Liu, Yuzhuo Fu
23 Mar 2024

SPA: Towards A Computational Friendly Cloud-Base and On-Devices Collaboration Seq2seq Personalized Generation
Yanming Liu, Xinyue Peng, Jiannan Cao, Le Dai, Xingzu Liu, Mingbang Wang, Weihao Liu
Tags: SyDa
11 Mar 2024

Boosting Residual Networks with Group Knowledge
Shengji Tang, Peng Ye, Baopu Li, Wei Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang
26 Aug 2023

Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining?
Fei-Yue Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding
24 Aug 2023

Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation
Haitao Tang, Yu Fu, Lei Sun, Jiabin Xue, Dan Liu, ..., Zhiqiang Ma, Minghui Wu, Jia Pan, Genshun Wan, Ming’En Zhao
27 Jun 2023

Epistemic Graph: A Plug-And-Play Module For Hybrid Representation Learning
Jin Yuan, Yang Zhang, Yangzhou Du, Zhongchao Shi, Xin Geng, Jianping Fan, Yong Rui
30 May 2023

One-stop Training of Multiple Capacity Models
Lan Jiang, Haoyang Huang, Dongdong Zhang, R. Jiang, Furu Wei
23 May 2023

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan
18 May 2023

Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation
Yuxin Ren, Zi-Qi Zhong, Xingjian Shi, Yi Zhu, Chun Yuan, Mu Li
16 May 2023

Visual Tuning
Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lin Liu, Shijie Wang, ..., Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen
Tags: VLM
10 May 2023

OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System
Chao Xue, W. Liu, Shunxing Xie, Zhenfang Wang, Jiaxing Li, ..., Shi-Yong Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao
01 Mar 2023

Source-Free Unsupervised Domain Adaptation: A Survey
Yuqi Fang, P. Yap, W. Lin, Hongtu Zhu, Mingxia Liu
31 Dec 2022

Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, ..., Yixin Chen, Xinbo Gao, Chun Miao, Xiaoou Tang, Dacheng Tao
Tags: VLM, ELM
04 Dec 2022

Low-Resource Dense Retrieval for Open-Domain Question Answering: A Comprehensive Survey
Xiaoyu Shen, Svitlana Vakulenko, Marco Del Tredici, Gianni Barlacchi, Bill Byrne, Adria de Gispert
Tags: RALM, VLM
05 Aug 2022

Dynamic Contrastive Distillation for Image-Text Retrieval
Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Liqiong Shen, Dacheng Tao
Tags: VLM
04 Jul 2022

Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Yunqing Zhao, Ngai-man Cheung
29 Jun 2022

SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution
Shwai He, Chenbo Jiang, Daize Dong, Liang Ding
05 Apr 2022

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
19 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han
12 Feb 2021

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM
20 Apr 2018