Invisible Backdoor Attack against Self-supervised Learning

Self-supervised learning (SSL) models are vulnerable to backdoor attacks. Existing backdoor attacks that are effective against SSL often rely on noticeable triggers, such as colored patches or visible noise, which are easy to detect under human inspection. This paper proposes an imperceptible and effective backdoor attack against self-supervised models. We first find that existing imperceptible triggers designed for supervised learning are less effective at compromising self-supervised models. We then show that this ineffectiveness stems from the overlap between the distributions of backdoor samples and the augmented samples used in SSL. Building on this insight, we design an attack that uses optimized triggers disentangled from the augmentation transformations in SSL while remaining imperceptible to human vision. Experiments on five datasets and six SSL algorithms demonstrate that our attack is highly effective and stealthy, and that it is strongly resistant to existing backdoor defenses. Our code can be found at this https URL.
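To make the core idea concrete, the sketch below shows one hypothetical way to optimize a norm-bounded (imperceptible) additive trigger whose effect is not washed out by SSL-style augmentations: the trigger is optimized through a set of typical augmentation transformations so that augmented, triggered images still map to an attacker-chosen representation. The surrogate encoder, loss, augmentation choices, and hyperparameters are illustrative assumptions for exposition, not the paper's actual method.

```python
# Hypothetical sketch: optimize an L_inf-bounded additive trigger that survives
# SSL-style augmentations. Encoder, loss, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torchvision import transforms


def optimize_trigger(encoder, images, target_feat, eps=8 / 255, steps=500, lr=1e-2):
    """Learn a shared additive trigger delta with ||delta||_inf <= eps such that
    augmented, triggered images map close to a chosen target representation."""
    encoder.eval()
    delta = torch.zeros_like(images[:1], requires_grad=True)  # one shared trigger

    # Typical SSL augmentations (crop, flip, color jitter, grayscale); the trigger
    # is optimized through them so its effect is disentangled from these transforms.
    ssl_aug = transforms.Compose([
        transforms.RandomResizedCrop(images.shape[-1], scale=(0.2, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
        transforms.RandomGrayscale(p=0.2),
    ])

    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (images + delta).clamp(0, 1)
        views = torch.stack([ssl_aug(img) for img in poisoned])  # augmented poisoned views
        feats = F.normalize(encoder(views), dim=1)
        # Pull triggered representations toward the attacker-chosen target feature,
        # so the trigger still works after augmentation.
        loss = -(feats @ F.normalize(target_feat, dim=0)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the trigger imperceptible
    return delta.detach()
```

In this sketch, the returned `delta` would be added to a small fraction of the unlabeled training data to poison the SSL encoder; the `eps` bound is what keeps the trigger invisible to human inspection.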
@article{zhang2025_2405.14672,
  title={Invisible Backdoor Attack against Self-supervised Learning},
  author={Hanrong Zhang and Zhenting Wang and Boheng Li and Fulin Lin and Tingxu Han and Mingyu Jin and Chenlu Zhan and Mengnan Du and Hongwei Wang and Shiqing Ma},
  journal={arXiv preprint arXiv:2405.14672},
  year={2025}
}