Indirect Gradient Matching for Adversarial Robust Distillation

6 December 2023
Hongsin Lee
Seungju Cho
Changick Kim
Abstract

Adversarial training significantly improves adversarial robustness, but superior performance is primarily attained with large models. This substantial performance gap for smaller models has spurred active research into adversarial distillation (AD) to narrow it. Existing AD methods leverage the teacher's logits as a guide. In contrast, we aim to transfer another piece of knowledge from the teacher: the input gradient. In this paper, we propose a distillation module termed the Indirect Gradient Distillation Module (IGDM), which indirectly matches the student's input gradient with that of the teacher. Experimental results show that IGDM integrates seamlessly with existing AD methods and significantly enhances their performance. In particular, on the CIFAR-100 dataset, IGDM improves AutoAttack accuracy from 28.06% to 30.32% with the ResNet-18 architecture and from 26.18% to 29.32% with the MobileNetV2 architecture when integrated into the SOTA method without additional data augmentation.
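The abstract only states that the student's input gradient is matched to the teacher's "indirectly." A minimal sketch of one plausible reading is given below, assuming the indirection comes from a first-order Taylor view: since f(x_adv) − f(x_clean) ≈ ∇f(x)·δ, aligning the output differences of student and teacher over the same perturbation aligns their input gradients without explicit double backpropagation. The function name, loss choice, and weighting are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn.functional as F

def indirect_gradient_match_loss(student, teacher, x_clean, x_adv):
    """Sketch: match the change in student outputs to the change in teacher
    outputs over the same perturbation, which indirectly aligns the two
    models' input gradients (first-order approximation)."""
    with torch.no_grad():
        t_diff = teacher(x_adv) - teacher(x_clean)   # ~ teacher input-gradient . delta
    s_diff = student(x_adv) - student(x_clean)       # ~ student input-gradient . delta
    return F.mse_loss(s_diff, t_diff)

In practice, a term like this would be added to an existing adversarial-distillation objective (e.g., a KL term on the teacher's logits) with a weighting hyperparameter, which is consistent with the abstract's claim that the module plugs into existing AD methods.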

@article{lee2025_2312.03286,
  title={Indirect Gradient Matching for Adversarial Robust Distillation},
  author={Hongsin Lee and Seungju Cho and Changick Kim},
  journal={arXiv preprint arXiv:2312.03286},
  year={2025}
}