Cited By
Reliable Adversarial Distillation with Unreliable Teachers
9 June 2021
Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
Tags: AAML
Links: arXiv (2106.04928) · PDF · HTML

Papers citing "Reliable Adversarial Distillation with Unreliable Teachers" (11 / 11 papers shown)

Revisiting the Relationship between Adversarial and Clean Training: Why Clean Training Can Make Adversarial Training Better
MingWei Zhou, Xiaobing Pei
AAML · 132 · 0 · 0 · 30 Mar 2025

Long-tailed Adversarial Training with Self-Distillation
Seungju Cho, Hongsin Lee, Changick Kim
AAML, TTA · 159 · 0 · 0 · 09 Mar 2025

Adversarial Prompt Distillation for Vision-Language Models
Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang
AAML, VLM · 79 · 1 · 0 · 22 Nov 2024

Dynamic Guidance Adversarial Distillation with Enhanced Teacher Knowledge
Hyejin Park, Dongbo Min
AAML · 34 · 2 · 0 · 03 Sep 2024

Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off
Futa Waseda, Ching-Chun Chang, Isao Echizen
AAML · 29 · 0 · 0 · 22 Feb 2024

Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training
Shruthi Gowda, Bahram Zonooz, Elahe Arani
AAML · 28 · 2 · 0 · 26 Jan 2024

Indirect Gradient Matching for Adversarial Robust Distillation
Hongsin Lee, Seungju Cho, Changick Kim
AAML, FedML · 48 · 2 · 0 · 06 Dec 2023

CAT: Collaborative Adversarial Training
Xingbin Liu, Huafeng Kuang, Xianming Lin, Yongjian Wu, Rongrong Ji
AAML · 17 · 4 · 0 · 27 Mar 2023

ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation
Tomokatsu Takahashi, Masanori Yamada, Yuuki Yamanaka, Tomoya Yamashita
20 · 0 · 0 · 01 Nov 2022

Accelerating Certified Robustness Training via Knowledge Transfer
Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
16 · 7 · 0 · 25 Oct 2022

On the Convergence and Robustness of Adversarial Training
Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu
AAML · 192 · 345 · 0 · 15 Dec 2021