ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

On the benefits of knowledge distillation for adversarial robustness

14 March 2022
Javier Maroto, Guillermo Ortiz-Jiménez, P. Frossard
Topics: AAML, FedML
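As background for the distillation-based works cited below: a minimal NumPy sketch of the standard temperature-scaled (Hinton-style) knowledge distillation loss. This is a generic illustration of the technique the papers build on, not the specific method of this paper or any paper in the list; the function names and the choice of temperature are my own.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Hinton-style distillation: KL(teacher soft targets || student predictions),
    # both softened with temperature T, scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures.
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl) * T * T)
```

When student and teacher logits agree, the loss is zero; in adversarial distillation variants, the student logits are typically computed on adversarially perturbed inputs while the teacher supplies the soft targets.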

Papers citing "On the benefits of knowledge distillation for adversarial robustness" (14 of 14 papers shown)

  1. Long-tailed Adversarial Training with Self-Distillation
     Seungju Cho, Hongsin Lee, Changick Kim. Topics: AAML, TTA. 09 Mar 2025
  2. Adversarial Prompt Distillation for Vision-Language Models
     Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang. Topics: AAML, VLM. 22 Nov 2024
  3. Adversarial Training: A Survey
     Mengnan Zhao, Lihe Zhang, Jingwen Ye, Huchuan Lu, Baocai Yin, Xinchao Wang. Topics: AAML. 19 Oct 2024
  4. Dynamic Guidance Adversarial Distillation with Enhanced Teacher Knowledge
     Hyejin Park, Dongbo Min. Topics: AAML. 03 Sep 2024
  5. Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph
     Zhiwei Wang, Jun Huang, Longhua Ma, Chengyu Wu, Hongyu Ma. 14 May 2024
  6. Robust feature knowledge distillation for enhanced performance of lightweight crack segmentation models
     Zhaohui Chen, Elyas Asadi Shamsabadi, Sheng Jiang, Luming Shen, Daniel Dias-da-Costa. 09 Apr 2024
  7. Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency
     Hallgrimur Thorsteinsson, Valdemar J Henriksen, Tong Chen, Raghavendra Selvan. Topics: AAML. 14 Mar 2024
  8. Distilling Adversarial Robustness Using Heterogeneous Teachers
     Jieren Deng, A. Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar. Topics: AAML. 23 Feb 2024
  9. Indirect Gradient Matching for Adversarial Robust Distillation
     Hongsin Lee, Seungju Cho, Changick Kim. Topics: AAML, FedML. 06 Dec 2023
  10. Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
      Qing Xu, Min-man Wu, Xiaoli Li, K. Mao, Zhenghua Chen. 07 Jul 2023
  11. Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training
      Fan Liu, Weijiao Zhang, Haowen Liu. Topics: AI4TS, OOD. 25 Jun 2023
  12. AdvFunMatch: When Consistent Teaching Meets Adversarial Robustness
      Ziuhi Wu, Haichang Gao, Bingqian Zhou, Ping Wang. Topics: AAML. 24 May 2023
  13. A Generative Framework for Low-Cost Result Validation of Machine Learning-as-a-Service Inference
      Abhinav Kumar, Miguel A. Guirao Aguilera, R. Tourani, S. Misra. Topics: AAML. 31 Mar 2023
  14. Maximum Likelihood Distillation for Robust Modulation Classification
      Javier Maroto, Gérôme Bovet, P. Frossard. Topics: AAML. 01 Nov 2022