Defending Deep Neural Networks against Backdoor Attacks via Module Switching

8 April 2025
Weijun Li
Ansh Arora
Xuanli He
Mark Dras
Qiongkai Xu
Communities: AAML, MoMe
Abstract

The exponential increase in the parameters of Deep Neural Networks (DNNs) has significantly raised the cost of independent training, particularly for resource-constrained entities. As a result, there is a growing reliance on open-source models. However, the opacity of training processes exacerbates security risks, making these models more vulnerable to malicious threats, such as backdoor attacks, while simultaneously complicating defense mechanisms. Merging homogeneous models has gained attention as a cost-effective post-training defense. However, we notice that existing strategies, such as weight averaging, only partially mitigate the influence of poisoned parameters and remain ineffective in disrupting the pervasive spurious correlations embedded across model parameters. We propose a novel module-switching strategy to break such spurious correlations within the model's propagation path. By leveraging evolutionary algorithms to optimize fusion strategies, we validate our approach against backdoor attacks targeting text and vision domains. Our method achieves effective backdoor mitigation even when incorporating a couple of compromised models, e.g., reducing the average attack success rate (ASR) to 22% compared to 31.9% with the best-performing baseline on SST-2.
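The abstract does not give implementation details, but the core idea of module switching, assembling a merged model by taking each module along the propagation path from one of several homogeneous source models, can be illustrated with a minimal sketch. The simple MLP architecture, the binary switching mask, and the use of PyTorch below are assumptions for illustration only; in the paper the switching configuration is reportedly optimized with evolutionary algorithms against a validation objective, which is not reproduced here.

```python
# Minimal illustrative sketch (not the authors' code): build a merged model by
# selecting each top-level module from one of two homogeneous source models
# according to a binary switching mask.
import copy
import torch
import torch.nn as nn


def switch_modules(model_a: nn.Module, model_b: nn.Module, mask) -> nn.Module:
    """Return a model whose i-th top-level module is copied from model_a
    if mask[i] is True, otherwise from model_b. Both models must share
    the same architecture (homogeneous models)."""
    merged = copy.deepcopy(model_a)
    modules_a = list(model_a.children())
    modules_b = list(model_b.children())
    merged_children = list(merged.children())
    for i, take_a in enumerate(mask):
        source = modules_a[i] if take_a else modules_b[i]
        merged_children[i].load_state_dict(source.state_dict())
    return merged


if __name__ == "__main__":
    def make_mlp():
        return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    model_a, model_b = make_mlp(), make_mlp()
    # The mask alternates modules between the two models along the propagation
    # path; in practice such a mask would be searched for (e.g., with an
    # evolutionary algorithm) rather than fixed by hand.
    merged = switch_modules(model_a, model_b, mask=[True, False, True])
    print(merged(torch.randn(1, 8)))
```

In this reading, switching whole modules rather than averaging weights changes which model contributes each stage of the forward pass, which is how the abstract describes breaking spurious correlations that a backdoor embeds across a single model's parameters.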

@article{li2025_2504.05902,
  title={Defending Deep Neural Networks against Backdoor Attacks via Module Switching},
  author={Weijun Li and Ansh Arora and Xuanli He and Mark Dras and Qiongkai Xu},
  journal={arXiv preprint arXiv:2504.05902},
  year={2025}
}