DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks

Abstract

Fault injection attacks are a potent threat against embedded implementations of neural network models. Several attack vectors have been proposed, such as misclassification, model extraction, and trojan/backdoor planting. Most of these attacks work by flipping bits in the memory where quantized model parameters are stored. In this paper, we introduce an encoding-based protection method against bit-flip attacks on neural networks, titled DeepNcode. We experimentally evaluate our proposal with several publicly available models and datasets, by using state-of-the-art bit-flip attacks: BFA, T-BFA, and TA-LBF. Our results show an increase in protection margin of up to 7.6× for 4-bit and 12.4× for 8-bit quantized networks. Memory overheads start at 50% of the original network size, while the time overheads are negligible. Moreover, DeepNcode does not require retraining and does not change the original accuracy of the model.
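To illustrate the general idea of encoding-based bit-flip detection (this is a toy sketch, not the paper's actual code construction; DeepNcode's concrete encoding, parameters, and overheads are described in the paper itself), one can store each 4-bit quantized weight as a codeword of an error-detecting code. Below, an extended Hamming(8,4) code with minimum distance 4 is used, so any flip of up to 3 bits in a stored codeword moves it outside the codebook and is flagged:

```python
# Toy illustration of encoding-based bit-flip detection for quantized
# weights (assumed example; not DeepNcode's actual encoding).
# Each 4-bit weight nibble is mapped to an extended Hamming(8,4) codeword
# with minimum Hamming distance 4, so any 1-, 2-, or 3-bit flip in a
# stored byte is detectable.

def encode4(nibble: int) -> int:
    """Encode a 4-bit value into an 8-bit extended Hamming(8,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]   # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                      # Hamming parity bits
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    bits.append(sum(bits) % 2)                   # overall parity extension
    return sum(b << i for i, b in enumerate(bits))

# The 16 valid codewords; any byte outside this set signals tampering.
CODEBOOK = {encode4(v): v for v in range(16)}

def check_weight(stored_byte: int):
    """Return the decoded 4-bit weight, or None if a bit flip is detected."""
    return CODEBOOK.get(stored_byte)

# A flipped bit yields a non-codeword and is detected:
w = encode4(0b1010)
assert check_weight(w) == 0b1010
assert check_weight(w ^ 0b100) is None           # single-bit flip detected
```

Note that this particular toy code doubles storage (4 data bits become 8), whereas the abstract reports memory overheads starting at 50%; the paper's construction is more efficient than this illustration.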
