MergeGuard: Efficient Thwarting of Trojan Attacks in Machine Learning Models

Abstract
This paper proposes MergeGuard, a novel methodology for mitigating AI Trojan attacks. Trojan attacks on AI models cause inputs embedded with triggers to be misclassified into an adversary's target class, posing a significant threat to the usability of models trained by untrusted third parties. The core of MergeGuard is a new post-training methodology for linearizing and merging fully connected layers, which we show simultaneously improves model generalizability and performance. Our proof-of-concept evaluation on Transformer models demonstrates that MergeGuard maintains model accuracy while decreasing the Trojan attack success rate, outperforming commonly used post-training, fine-tuning-based Trojan mitigation methodologies.
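The abstract does not spell out the implementation, but the algebraic fact underlying the merging of fully connected layers can be illustrated with a short, hedged PyTorch sketch: two consecutive linear layers with no nonlinearity between them collapse into a single equivalent linear layer. The function name merge_linear_layers and the layer dimensions below are hypothetical choices for illustration, not the authors' exact MergeGuard procedure.

import torch
import torch.nn as nn

# Sketch of the linear-merging identity assumed by the abstract's description:
#   W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)
def merge_linear_layers(fc1: nn.Linear, fc2: nn.Linear) -> nn.Linear:
    """Return a single nn.Linear equivalent to fc2(fc1(x)) when no
    nonlinearity sits between the two layers."""
    merged = nn.Linear(fc1.in_features, fc2.out_features, bias=True)
    with torch.no_grad():
        merged.weight.copy_(fc2.weight @ fc1.weight)   # W = W2 @ W1
        b1 = fc1.bias if fc1.bias is not None else torch.zeros(fc1.out_features)
        b2 = fc2.bias if fc2.bias is not None else torch.zeros(fc2.out_features)
        merged.bias.copy_(fc2.weight @ b1 + b2)        # b = W2 @ b1 + b2
    return merged

# Quick numerical check that the merged layer reproduces the composition
# (hypothetical Transformer-like MLP dimensions).
fc1, fc2 = nn.Linear(768, 3072), nn.Linear(3072, 768)
merged = merge_linear_layers(fc1, fc2)
x = torch.randn(4, 768)
assert torch.allclose(fc2(fc1(x)), merged(x), atol=1e-4)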
@article{shabgahi2025_2505.04015,
  title   = {MergeGuard: Efficient Thwarting of Trojan Attacks in Machine Learning Models},
  author  = {Soheil Zibakhsh Shabgahi and Yaman Jandali and Farinaz Koushanfar},
  journal = {arXiv preprint arXiv:2505.04015},
  year    = {2025}
}