How and When Adversarial Robustness Transfers in Knowledge Distillation?

22 October 2021
Rulin Shao
Ming Zhou
C. Bezemer
Cho-Jui Hsieh
    AAML

Papers citing "How and When Adversarial Robustness Transfers in Knowledge Distillation?"

3 / 3 papers shown

1. Releasing Inequality Phenomena in $L_{\infty}$-Adversarial Training via Input Gradient Distillation
   Junxi Chen, Junhao Dong, Xiaohua Xie
   AAML · 16 May 2023

2. Maximum Likelihood Distillation for Robust Modulation Classification
   Javier Maroto, Gérôme Bovet, P. Frossard
   AAML · 01 Nov 2022

3. Intriguing Properties of Vision Transformers
   Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang
   ViT · 21 May 2021