Up to 100× Faster Data-free Knowledge Distillation

12 December 2021
Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, Mingli Song
DD

Papers citing "Up to 100× Faster Data-free Knowledge Distillation"

4 / 4 papers shown
DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan, Shenghong Li, Ruijie Zhao, Yuan Tian, Yuanyuan Zhao
AAML · 13 Jun 2023

Data-Free Adversarial Knowledge Distillation for Graph Neural Networks
Yu-Lin Zhuang, Lingjuan Lyu, Chuan Shi, Carl Yang, Lichao Sun
08 May 2022

Distilling Knowledge from Graph Convolutional Networks
Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang
23 Mar 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017