Carefully Blending Adversarial Training, Purification, and Aggregation Improves Adversarial Robustness

In this work, we propose a novel adversarial defence mechanism for image classification, CARSO, blending the paradigms of adversarial training and adversarial purification in a synergistic, robustness-enhancing way. The method builds upon an adversarially-trained classifier and learns to map the classifier's internal representation of a potentially perturbed input onto a distribution of tentative clean reconstructions. Multiple samples from this distribution are classified by the same adversarially-trained model, and a carefully chosen aggregation of its outputs constitutes the final robust prediction. Experimental evaluation using a well-established benchmark of strong adaptive attacks, across different image datasets, shows that CARSO is able to defend itself against adaptive end-to-end white-box attacks devised for stochastic defences. At the cost of a modest drop in clean accuracy, our method improves by a significant margin the state-of-the-art robust classification accuracy against AutoAttack on CIFAR-10, CIFAR-100, and TinyImageNet-200. Code and instructions to obtain pre-trained models are available at: this https URL.
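The inference pipeline described above can be sketched as follows. Every function body here is an illustrative stand-in, not the paper's actual architecture: the feature map, the stochastic purifier, and the mean-softmax aggregation are all assumptions made for the sketch, chosen only to show the shape of the representation-to-reconstructions-to-aggregation flow.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 10  # e.g. CIFAR-10


def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def classifier_logits(x):
    # Stand-in for the adversarially-trained classifier: a fixed linear map.
    W = np.linspace(-1.0, 1.0, x.size * N_CLASSES).reshape(x.size, N_CLASSES)
    return x @ W


def internal_repr(x):
    # Stand-in for the classifier's internal representation of the input.
    return np.tanh(x)


def purifier_sample(h, n_samples):
    # Hypothetical stochastic purifier: maps the internal representation to a
    # distribution of tentative clean reconstructions (here, simply the
    # representation plus Gaussian noise).
    return h + 0.1 * rng.standard_normal((n_samples, h.size))


def carso_predict(x, n_samples=32):
    # 1. Extract the internal representation of the (possibly perturbed) input.
    h = internal_repr(x)
    # 2. Draw multiple tentative clean reconstructions from the purifier.
    recons = purifier_sample(h, n_samples)
    # 3. Classify each reconstruction with the same adversarially-trained model.
    probs = softmax(np.stack([classifier_logits(r) for r in recons]))
    # 4. Aggregate the per-sample outputs (mean softmax is an assumption here).
    return int(probs.mean(axis=0).argmax())


pred = carso_predict(rng.standard_normal(8))
print(pred)
```

Averaging softmax outputs is only one plausible aggregation; the paper refers to a "carefully chosen" aggregation without specifying it in the abstract, so this choice is purely illustrative.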
@article{ballarin2025_2306.06081,
  title   = {Carefully Blending Adversarial Training, Purification, and Aggregation Improves Adversarial Robustness},
  author  = {Emanuele Ballarin and Alessio Ansuini and Luca Bortolussi},
  journal = {arXiv preprint arXiv:2306.06081},
  year    = {2025}
}