Leveraging Generalizability of Image-to-Image Translation for Enhanced Adversarial Defense

2 April 2025
Haibo Zhang
Zhihua Yao
Kouichi Sakurai
Takeshi Saitoh
Abstract

In the rapidly evolving field of artificial intelligence, machine learning has emerged as a key technology, characterized by both vast potential and inherent risks. The stability and reliability of machine learning models are critical, as they are frequent targets of security threats. Adversarial attacks, first rigorously defined by Ian Goodfellow et al. in 2013, expose a critical vulnerability: nearly invisible perturbations applied to an image can trick a model into making incorrect predictions. Although many studies have focused on constructing sophisticated defensive mechanisms to mitigate such attacks, they often overlook the substantial time and computational cost of training and maintaining these defenses. Ideally, a defense method should generalize across diverse, even unseen, adversarial attacks with minimal overhead. Building on our previous work on image-to-image translation-based defenses, this study introduces an improved model that incorporates residual blocks to enhance generalizability. The proposed method requires training only a single model, defends effectively against diverse attack types, and transfers well between different target models. Experiments show that our model can restore classification accuracy from near zero to an average of 72% while remaining competitive with state-of-the-art methods.
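To make the idea concrete, the sketch below shows what a residual-block image-to-image translation network used as an adversarial "purifier" might look like in PyTorch. The architecture details (channel widths, number of residual blocks, instance normalization, and all names) are illustrative assumptions based on common ResNet-style generator designs, not the authors' exact model.

```python
# Minimal sketch of a residual-block image-to-image "purifier".
# Assumption: trained to map adversarial images back to clean ones;
# layer sizes and block count are illustrative, not the paper's design.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as in ResNet-style
    image-to-image generators."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # skip connection preserves low-level detail


class Purifier(nn.Module):
    """Encoder -> residual blocks -> decoder (hypothetical architecture)."""

    def __init__(self, in_channels: int = 3, base: int = 64, n_blocks: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(
            *[ResidualBlock(base * 2) for _ in range(n_blocks)]
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.InstanceNorm2d(base),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, in_channels, kernel_size=7, padding=3),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, x_adv: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.blocks(self.encoder(x_adv)))


if __name__ == "__main__":
    purifier = Purifier()
    x_adv = torch.rand(1, 3, 224, 224) * 2 - 1  # dummy adversarial input
    x_clean = purifier(x_adv)                   # purified image, same shape
    print(x_clean.shape)  # torch.Size([1, 3, 224, 224])
```

Because the purifier operates on the input image rather than on any classifier's internals, it can be prepended to different target models at inference time, which is consistent with the abstract's claim that a single trained model transfers across targets.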

View on arXiv
@article{zhang2025_2504.01399,
  title={Leveraging Generalizability of Image-to-Image Translation for Enhanced Adversarial Defense},
  author={Haibo Zhang and Zhihua Yao and Kouichi Sakurai and Takeshi Saitoh},
  journal={arXiv preprint arXiv:2504.01399},
  year={2025}
}