
PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection

Wei Li
Pin-Yu Chen
Sijia Liu
Ren Wang
Abstract

Deep neural networks are susceptible to backdoor attacks, where adversaries manipulate model predictions by inserting malicious samples into the training data. Identifying suspicious training data to unveil potential backdoor samples remains a significant challenge. In this paper, we propose a novel method, Prediction Shift Backdoor Detection (PSBD), which leverages an uncertainty-based approach requiring only minimal unlabeled clean validation data. PSBD is motivated by an intriguing Prediction Shift (PS) phenomenon: when dropout is applied during inference, a poisoned model's predictions on clean data often shift away from the true labels toward certain other labels, while backdoor samples exhibit less PS. We hypothesize that PS results from a neuron bias effect, which makes neurons favor features of certain classes. PSBD identifies backdoor training samples by computing the Prediction Shift Uncertainty (PSU), the variance in probability values when dropout layers are toggled on and off during model inference. Extensive experiments verify the effectiveness and efficiency of PSBD, which achieves state-of-the-art results among mainstream detection methods. The code is available at this https URL.
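As a rough, hypothetical sketch of the PSU computation described above (assuming a PyTorch classifier; the function names, the number of stochastic passes, and the use of the predicted-class probability are illustrative assumptions, not the authors' released code):

import torch
import torch.nn.functional as F

def enable_dropout(model):
    # Keep the model in eval mode overall, but switch dropout layers back to
    # train mode so they stay active during inference. (Only torch.nn.Dropout
    # is handled here; a fuller version would also cover Dropout2d, etc.)
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

@torch.no_grad()
def prediction_shift_uncertainty(model, x, num_passes=10):
    # Estimate a PSU-style score for a batch x: the variance of the softmax
    # probability assigned to each sample's deterministically predicted class,
    # taken across one dropout-off pass and several dropout-on passes.
    model.eval()
    probs_off = F.softmax(model(x), dim=1)
    pred_class = probs_off.argmax(dim=1)
    p_off = probs_off.gather(1, pred_class.unsqueeze(1))        # shape (B, 1)

    enable_dropout(model)
    p_on = []
    for _ in range(num_passes):
        probs = F.softmax(model(x), dim=1)
        p_on.append(probs.gather(1, pred_class.unsqueeze(1)))
    model.eval()  # restore fully deterministic behavior

    all_probs = torch.cat([p_off] + p_on, dim=1)                # (B, num_passes + 1)
    return all_probs.var(dim=1)

Under the paper's hypothesis, backdoor samples would tend to show smaller prediction shift and hence lower scores of this kind than clean samples, so ranking training samples by such a score could flag backdoor candidates.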

@article{li2025_2406.05826,
  title={PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection},
  author={Wei Li and Pin-Yu Chen and Sijia Liu and Ren Wang},
  journal={arXiv preprint arXiv:2406.05826},
  year={2025}
}