Masked Generative Distillation

3 May 2022
Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, Chun Yuan
FedML

Papers citing "Masked Generative Distillation"

50 / 75 papers shown

ACAM-KD: Adaptive and Cooperative Attention Masking for Knowledge Distillation
Qizhen Lan, Qing Tian
VLM · 54 · 0 · 0 · 08 Mar 2025

CLoCKDistill: Consistent Location-and-Context-aware Knowledge Distillation for DETRs
Qizhen Lan, Qing Tian
47 · 0 · 0 · 15 Feb 2025

Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures
Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen
33 · 0 · 0 · 10 Feb 2025

Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation
Jiaming Lv, Haoyuan Yang, P. Li
69 · 1 · 0 · 11 Dec 2024

Decoupling Dark Knowledge via Block-wise Logit Distillation for Feature-level Alignment
Chengting Yu, Fengzhao Zhang, Ruizhe Chen, Zuozhu Liu, Shurun Tan, Er-ping Li, Aili Wang
28 · 2 · 0 · 03 Nov 2024

SNN-PAR: Energy Efficient Pedestrian Attribute Recognition via Spiking Neural Networks
Haiyang Wang, Qian Zhu, Mowen She, Yabo Li, Haoyu Song, Minghe Xu, Xiao Wang
ViT · 26 · 0 · 0 · 10 Oct 2024

Kendall's τ Coefficient for Logits Distillation
Yuchen Guan, Runxi Cheng, Kang Liu, Chun Yuan
18 · 0 · 0 · 26 Sep 2024

MedDet: Generative Adversarial Distillation for Efficient Cervical Disc Herniation Detection
Zeyu Zhang, Nengmin Yi, Shengbo Tan, Ying Cai, Yi Yang, Lei Xu, Qingtai Li, Zhang Yi, Daji Ergu, Yang Zhao
MedIm · 19 · 8 · 0 · 30 Aug 2024

Masked Image Modeling: A Survey
Vlad Hondru, Florinel-Alin Croitoru, Shervin Minaee, Radu Tudor Ionescu, N. Sebe
53 · 6 · 0 · 13 Aug 2024

DFMSD: Dual Feature Masking Stage-wise Knowledge Distillation for Object Detection
Zhourui Zhang, Jun Li, Zhijian Wu, Jifeng Shen, Jianhua Xu
28 · 0 · 0 · 18 Jul 2024

Reprogramming Distillation for Medical Foundation Models
Yuhang Zhou, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang
41 · 1 · 0 · 09 Jul 2024

Teaching with Uncertainty: Unleashing the Potential of Knowledge Distillation in Object Detection
Junfei Yi, Jianxu Mao, Tengfei Liu, Mingjie Li, Hanyu Gu, Hui Zhang, Xiaojun Chang, Yaonan Wang
28 · 0 · 0 · 11 Jun 2024

Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures
Hongjun Wu, Li Xiao, Xingkuo Zhang, Yining Miao
25 · 1 · 0 · 28 May 2024

Label-efficient Semantic Scene Completion with Scribble Annotations
Song Wang, Jiawei Yu, Wentong Li, Hao Shi, Kailun Yang, Junbo Chen, Jianke Zhu
27 · 5 · 0 · 24 May 2024

AMFD: Distillation via Adaptive Multimodal Fusion for Multispectral Pedestrian Detection
Zizhao Chen, Yeqiang Qian, Xiaoxiao Yang, Chunxiang Wang, Ming Yang
24 · 1 · 0 · 21 May 2024

MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities
Kunxi Li, Tianyu Zhan, Kairui Fu, Shengyu Zhang, Kun Kuang, Jiwei Li, Zhou Zhao, Fei Wu
MoMe · 19 · 0 · 0 · 20 Apr 2024

Not All Voxels Are Equal: Hardness-Aware Semantic Scene Completion with Self-Distillation
Song Wang, Jiawei Yu, Wentong Li, Wenyu Liu, Xiaolu Liu, Junbo Chen, Jianke Zhu
39 · 17 · 0 · 18 Apr 2024

Task Integration Distillation for Object Detectors
Hai Su, ZhenWen Jian, Songsen Yu
33 · 1 · 0 · 02 Apr 2024

Attention-guided Feature Distillation for Semantic Segmentation
Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, S. Kasaei
20 · 0 · 0 · 08 Mar 2024

Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation
Yaofo Chen, Shuaicheng Niu, Yaowei Wang, Shoukai Xu, Hengjie Song, Mingkui Tan
19 · 6 · 0 · 27 Feb 2024

Precise Knowledge Transfer via Flow Matching
Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai
10 · 1 · 0 · 03 Feb 2024

Feature Denoising Diffusion Model for Blind Image Quality Assessment
Xudong Li, Jingyuan Zheng, Runze Hu, Yan Zhang, Ke Li, ..., Xiawu Zheng, Yutao Liu, Shengchuan Zhang, Pingyang Dai, Rongrong Ji
DiffM · 36 · 0 · 0 · 22 Jan 2024

Rethinking Centered Kernel Alignment in Knowledge Distillation
Zikai Zhou, Yunhang Shen, Shitong Shao, Linrui Gong, Shaohui Lin
16 · 1 · 0 · 22 Jan 2024

Generative Denoise Distillation: Simple Stochastic Noises Induce Efficient Knowledge Transfer for Dense Prediction
Zhaoge Liu, Xiaohao Xu, Yunkang Cao, Weiming Shen
VLM · 10 · 0 · 0 · 16 Jan 2024

Graph Relation Distillation for Efficient Biomedical Instance Segmentation
Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu
24 · 0 · 0 · 12 Jan 2024

Distilling Temporal Knowledge with Masked Feature Reconstruction for 3D Object Detection
Haowen Zheng, Dong Cao, Jintao Xu, Rui Ai, Weihao Gu, Yang Yang, Yanyan Liang
28 · 1 · 0 · 03 Jan 2024

RdimKD: Generic Distillation Paradigm by Dimensionality Reduction
Yi Guo, Yiqian He, Xiaoyang Li, Haotong Qin, Van Tung Pham, Yang Zhang, Shouda Liu
29 · 1 · 0 · 14 Dec 2023

Semi-supervised Semantic Segmentation Meets Masked Modeling: Fine-grained Locality Learning Matters in Consistency Regularization
W. Pan, Zhe Xu, Jiangpeng Yan, Zihan Wu, R. Tong, Xiu Li, Jianhua Yao
ISeg · 16 · 1 · 0 · 14 Dec 2023

Spatial-wise Dynamic Distillation for MLP-like Efficient Visual Fault Detection of Freight Trains
Yang Zhang, Huilin Pan, Mingying Li, An-Chi Wang, Yang Zhou, Hongliang Ren
15 · 1 · 0 · 10 Dec 2023

Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation
Jiawei Fan, Chao Li, Xiaolong Liu, Meina Song, Anbang Yao
17 · 5 · 0 · 07 Dec 2023

PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation
Jiancang Ma, Chen Chen, Qingsong Xie, H. Lu
DiffM · VLM · 20 · 3 · 0 · 28 Nov 2023

FreeKD: Knowledge Distillation via Semantic Frequency Prompt
Yuan Zhang, Tao Huang, Jiaming Liu, Tao Jiang, Kuan Cheng, Shanghang Zhang
AAML · 16 · 10 · 0 · 20 Nov 2023

Object-centric Cross-modal Feature Distillation for Event-based Object Detection
Lei Li, Alexander Liniger, Mario Millhaeusler, Vagia Tsiminaki, Yuanyou Li, Dengxin Dai
23 · 4 · 0 · 09 Nov 2023

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang
11 · 0 · 0 · 26 Oct 2023

Self-distilled Masked Attention guided masked image modeling with noise Regularized Teacher (SMART) for medical image analysis
Jue Jiang, Aneesh Rangnekar, Chloe Min Seo Choi, H. Veeraraghavan
MedIm · 14 · 0 · 0 · 02 Oct 2023

Towards Comparable Knowledge Distillation in Semantic Image Segmentation
Onno Niemann, Christopher Vox, Thorben Werner
VLM · 11 · 1 · 0 · 07 Sep 2023

Knowledge Distillation Layer that Lets the Student Decide
Ada Gorgun, Y. Z. Gürbüz, Aydin Alatan
8 · 0 · 0 · 06 Sep 2023

DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation
Guangqi Yang, Yin Tang, Zhijian Wu, Jun Yu Li, Jianhua Xu, Xili Wan
9 · 3 · 0 · 06 Sep 2023

Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection
Longrong Yang, Xianpan Zhou, Xuewei Li, Liang Qiao, Zheyang Li, Zi-Liang Yang, Gaoang Wang, Xi Li
16 · 4 · 0 · 28 Aug 2023

Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation
Shengcao Cao, Mengtian Li, James Hays, Deva Ramanan, Yu-xiong Wang, Liangyan Gui
VLM · 21 · 10 · 0 · 17 Aug 2023

One-stage Low-resolution Text Recognition with High-resolution Knowledge Transfer
Han Guo, Tao Dai, Mingyan Zhu, G. Meng, Bin Chen, Zhi Wang, Shutao Xia
21 · 1 · 0 · 05 Aug 2023

NormKD: Normalized Logits for Knowledge Distillation
Zhihao Chi, Tu Zheng, Hengjia Li, Zheng Yang, Boxi Wu, Binbin Lin, D. Cai
11 · 13 · 0 · 01 Aug 2023

Effective Whole-body Pose Estimation with Two-stages Distillation
Zhendong Yang, Ailing Zeng, Chun Yuan, Yu Li
14 · 151 · 0 · 29 Jul 2023

BPKD: Boundary Privileged Knowledge Distillation For Semantic Segmentation
Liyang Liu, Zihan Wang, M. Phan, Bowen Zhang, Jinchao Ge, Yifan Liu
11 · 7 · 0 · 13 Jun 2023

Are Large Kernels Better Teachers than Transformers for ConvNets?
Tianjin Huang, Lu Yin, Zhenyu (Allen) Zhang, Lijuan Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu
19 · 13 · 0 · 30 May 2023

Align, Perturb and Decouple: Toward Better Leverage of Difference Information for RSI Change Detection
Supeng Wang, Yuxi Li, Ming-Kun Xie, M. Chi, Yabiao Wang, Chengjie Wang, Wenjie Zhu
15 · 7 · 0 · 30 May 2023

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi Wang
VLM · 20 · 22 · 0 · 27 May 2023

VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
Zhiwei Hao, Jianyuan Guo, Kai Han, Han Hu, Chang Xu, Yunhe Wang
17 · 14 · 0 · 25 May 2023

NORM: Knowledge Distillation via N-to-One Representation Matching
Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao
36 · 66 · 0 · 23 May 2023

CM-MaskSD: Cross-Modality Masked Self-Distillation for Referring Image Segmentation
Wenxuan Wang, Jing Liu, Xingjian He, Yisi Zhang, Cheng Chen, Jiachen Shen, Yan Zhang, Jiangyun Li
14 · 8 · 0 · 19 May 2023