ResearchTrend.AI

Reliable Adversarial Distillation with Unreliable Teachers

9 June 2021
Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
AAML

Papers citing "Reliable Adversarial Distillation with Unreliable Teachers"

11 / 11 papers shown

  1. Revisiting the Relationship between Adversarial and Clean Training: Why Clean Training Can Make Adversarial Training Better. MingWei Zhou, Xiaobing Pei. AAML. 30 Mar 2025.
  2. Long-tailed Adversarial Training with Self-Distillation. Seungju Cho, Hongsin Lee, Changick Kim. AAML, TTA. 09 Mar 2025.
  3. Adversarial Prompt Distillation for Vision-Language Models. Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang. AAML, VLM. 22 Nov 2024.
  4. Dynamic Guidance Adversarial Distillation with Enhanced Teacher Knowledge. Hyejin Park, Dongbo Min. AAML. 03 Sep 2024.
  5. Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off. Futa Waseda, Ching-Chun Chang, Isao Echizen. AAML. 22 Feb 2024.
  6. Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training. Shruthi Gowda, Bahram Zonooz, Elahe Arani. AAML. 26 Jan 2024.
  7. Indirect Gradient Matching for Adversarial Robust Distillation. Hongsin Lee, Seungju Cho, Changick Kim. AAML, FedML. 06 Dec 2023.
  8. CAT: Collaborative Adversarial Training. Xingbin Liu, Huafeng Kuang, Xianming Lin, Yongjian Wu, Rongrong Ji. AAML. 27 Mar 2023.
  9. ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation. Tomokatsu Takahashi, Masanori Yamada, Yuuki Yamanaka, Tomoya Yamashita. 01 Nov 2022.
  10. Accelerating Certified Robustness Training via Knowledge Transfer. Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati. 25 Oct 2022.
  11. On the Convergence and Robustness of Adversarial Training. Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu. AAML. 15 Dec 2021.