Blind Backdoors in Deep Learning Models
- AAML, FedML, SILM
We investigate a new method for injecting backdoors into machine learning models, based on poisoning the loss-value computation in the model-training code. We use it to demonstrate new classes of backdoors strictly more powerful than those in prior literature: single-pixel and physical backdoors in ImageNet models, backdoors that switch the model to a covert, privacy-violating task, and backdoors that do not require inference-time input modifications. Our attack is \emph{blind}: the attacker cannot modify the training data, nor observe the execution of his code, nor access the resulting model. Blind backdoor training uses multi-objective optimization to achieve high accuracy on both the main and backdoor tasks. Finally, we show how the blind attack can evade all known defenses, and propose new ones.
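To make the loss-value poisoning concrete, below is a minimal, hypothetical sketch (assuming a standard PyTorch classification setup) of what a compromised loss computation might look like: it blends the main-task loss with a loss on trigger-stamped, relabeled copies of the batch. The helper names (`synthesize_backdoor_batch`, `blind_loss`), the single-pixel trigger, and the fixed blending coefficient `alpha` are illustrative assumptions, not the paper's code; the paper balances the two objectives with multi-objective optimization rather than a fixed weight.

```python
import torch
import torch.nn.functional as F

def synthesize_backdoor_batch(x, y, target_label=0):
    """Hypothetical input synthesizer: stamp a trigger (here, a single
    bright corner pixel) onto a copy of the batch and relabel it to the
    attacker-chosen target class."""
    x_bd = x.clone()
    x_bd[:, :, 0, 0] = 1.0                      # single-pixel "trigger" (NCHW layout assumed)
    y_bd = torch.full_like(y, target_label)     # attacker-chosen label
    return x_bd, y_bd

def blind_loss(model, x, y, alpha=0.5):
    """Poisoned replacement for the training code's loss computation:
    returns a blend of the main-task loss and the backdoor-task loss.
    A fixed alpha stands in for the paper's adaptive multi-objective
    balancing of the two tasks."""
    loss_main = F.cross_entropy(model(x), y)
    x_bd, y_bd = synthesize_backdoor_batch(x, y)
    loss_backdoor = F.cross_entropy(model(x_bd), y_bd)
    return alpha * loss_main + (1 - alpha) * loss_backdoor
```

Because the training loop only ever sees a scalar loss value, the rest of the pipeline (data loading, optimizer, evaluation) runs unmodified, which is what makes the attack "blind" in the sense described above.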