arXiv: 2302.14290
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Gaurav Patel, Konda Reddy Mopuri, Qiang Qiu
28 February 2023
Papers citing "Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation" (6 / 6 papers shown)
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 12 May 2025
Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization
Zichen Miao, Zhengyuan Yang, Kevin Lin, Ze Wang, Zicheng Liu, Lijuan Wang, Qiang Qiu
04 Oct 2024
Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang, Zhaoyu Chen, Jie M. Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi
31 Jul 2023
Gradient Matching for Domain Generalization
Yuge Shi, Jeffrey S. Seely, Philip H. S. Torr, Siddharth Narayanaswamy, Awni Y. Hannun, Nicolas Usunier, Gabriel Synnaeve
OOD · 20 Apr 2021
Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Luca Franceschi, P. Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil
13 Jun 2018
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017