Knowledge Diffusion for Distillation
arXiv 2305.15712 · 25 May 2023
Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chao Qian, Chang Xu
Papers citing "Knowledge Diffusion for Distillation" (29 papers):
- Scale-wise Distillation of Diffusion Models. Nikita Starodubcev, Denis Kuznedelev, Artem Babenko, Dmitry Baranchuk. 20 Mar 2025. [DiffM]
- Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence. Zhaowei Chen, Borui Zhao, Yuchen Ge, Yuhao Chen, Renjie Song, Jiajun Liang. 09 Mar 2025.
- ACAM-KD: Adaptive and Cooperative Attention Masking for Knowledge Distillation. Qizhen Lan, Qing Tian. 08 Mar 2025. [VLM]
- Cached Adaptive Token Merging: Dynamic Token Reduction and Redundant Computation Elimination in Diffusion Model. Omid Saghatchian, Atiyeh Gh. Moghadam, Ahmad Nickabadi. 03 Jan 2025. [MoMe]
- Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation. Jiaming Lv, Haoyuan Yang, P. Li. 11 Dec 2024.
- MAS-Attention: Memory-Aware Stream Processing for Attention Acceleration on Resource-Constrained Edge Devices. Mohammadali Shakerdargah, Shan Lu, Chao Gao, Di Niu. 20 Nov 2024.
- Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples. Kirill Lukyanov, Andrew Perminov, D. Turdakov, Mikhail Pautov. 21 Oct 2024. [AAML]
- Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher. Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen. 05 Oct 2024.
- CA-BERT: Leveraging Context Awareness for Enhanced Multi-Turn Chat Interaction. Minghao Liu, Mingxiu Sui, Yi Nan, Cangqing Wang, Zhijie Zhou. 05 Sep 2024.
- Computer Vision Model Compression Techniques for Embedded Systems: A Survey. Alexandre Lopes, Fernando Pereira dos Santos, D. Oliveira, Mauricio Schiezaro, Hélio Pedrini. 15 Aug 2024.
- InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation. Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan. 25 Jun 2024.
- Teaching with Uncertainty: Unleashing the Potential of Knowledge Distillation in Object Detection. Junfei Yi, Jianxu Mao, Tengfei Liu, Mingjie Li, Hanyu Gu, Hui Zhang, Xiaojun Chang, Yaonan Wang. 11 Jun 2024.
- LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving. Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan. 13 Mar 2024.
- Attention-guided Feature Distillation for Semantic Segmentation. Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, S. Kasaei. 08 Mar 2024.
- Model Compression Method for S4 with Diagonal State Space Layers using Balanced Truncation. Haruka Ezoe, Kazuhiro Sato. 25 Feb 2024.
- Precise Knowledge Transfer via Flow Matching. Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai. 03 Feb 2024.
- Feature Denoising Diffusion Model for Blind Image Quality Assessment. Xudong Li, Jingyuan Zheng, Runze Hu, Yan Zhang, Ke Li, ..., Xiawu Zheng, Yutao Liu, Shengchuan Zhang, Pingyang Dai, Rongrong Ji. 22 Jan 2024. [DiffM]
- Generative Denoise Distillation: Simple Stochastic Noises Induce Efficient Knowledge Transfer for Dense Prediction. Zhaoge Liu, Xiaohao Xu, Yunkang Cao, Weiming Shen. 16 Jan 2024. [VLM]
- Cloud-Device Collaborative Learning for Multimodal Large Language Models. Guanqun Wang, Jiaming Liu, Chenxuan Li, Junpeng Ma, Yuan Zhang, ..., Kevin Zhang, Maurice Chong, Ray Zhang, Yijiang Liu, Shanghang Zhang. 26 Dec 2023.
- FreeKD: Knowledge Distillation via Semantic Frequency Prompt. Yuan Zhang, Tao Huang, Jiaming Liu, Tao Jiang, Kuan Cheng, Shanghang Zhang. 20 Nov 2023. [AAML]
- On the Design Fundamentals of Diffusion Models: A Survey. Ziyi Chang, G. Koulieris, Hubert P. H. Shum. 07 Jun 2023. [DiffM]
- Stable Diffusion is Unstable. Chengbin Du, Yanxi Li, Zhongwei Qiu, Chang Xu. 05 Jun 2023. [DiffM]
- Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty. Yuan Zhang, Weihua Chen, Yichen Lu, Tao Huang, Xiuyu Sun, Jian Cao. 04 May 2023.
- Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again. Xin-Chun Li, Wenxuan Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan. 10 Oct 2022.
- ResNet strikes back: An improved training procedure in timm. Ross Wightman, Hugo Touvron, Hervé Jégou. 01 Oct 2021. [AI4TS]
- Distilling Knowledge via Knowledge Review. Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia. 19 Apr 2021.
- Learning Student-Friendly Teacher Networks for Knowledge Distillation. D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han. 12 Feb 2021.
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam. 17 Apr 2017. [3DH]
- Aggregated Residual Transformations for Deep Neural Networks. Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He. 16 Nov 2016.