TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep
Neural Networks
- AAML
Most data manipulation attacks on deep neural networks (DNNs) at the training stage introduce perceptible noise that can be mitigated by preprocessing at inference. Therefore, input perturbation attacks during inference (e.g., adversarial attacks) are becoming more popular. However, these attacks do not consider imperceptibility in their optimization objectives and can be detected by correlation and structural-similarity tests. In this paper, we therefore propose a novel methodology that automatically generates imperceptible attack images by applying the back-propagation algorithm to pre-trained DNNs. We present a case study on traffic sign detection using the VGGNet and the German Traffic Sign Recognition Benchmark (GTSRB) dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully cause misclassification while remaining imperceptible in both 'subjective' and 'objective' quality tests.
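The following is a minimal sketch, not the authors' released code, of the idea summarized in the abstract: back-propagate through a frozen, pre-trained classifier to perturb an input image toward a target (mis)classification, while an imperceptibility check keeps the result highly correlated with the original. The model interface, step size, iteration count, and correlation threshold are illustrative assumptions, not the paper's reported settings.

```python
# Sketch of gradient-based, imperceptibility-constrained attack image generation.
# Assumes a PyTorch classifier `model` taking a 1xCxHxW image in [0, 1].
import torch
import torch.nn.functional as F


def correlation(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between two images (flattened)."""
    a, b = a.flatten(), b.flatten()
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + 1e-12)


def generate_attack(model, image, target_class, steps=200, step_size=1e-3,
                    min_corr=0.999):
    """Perturb `image` toward `target_class` via back-propagation on a
    pre-trained model, stopping if correlation with the original drops."""
    model.eval()
    original = image.clone()
    adv = image.clone().requires_grad_(True)

    for _ in range(steps):
        logits = model(adv)
        if logits.argmax(dim=1).item() == target_class:
            break  # misclassification achieved
        # Targeted loss: push the prediction toward the attacker-chosen class.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        grad, = torch.autograd.grad(loss, adv)

        with torch.no_grad():
            adv -= step_size * grad.sign()   # small signed-gradient step
            adv.clamp_(0.0, 1.0)             # keep a valid image
            # Imperceptibility check: abort if the attack becomes visible
            # under a simple correlation test (the paper also uses
            # structural-similarity metrics).
            if correlation(adv, original) < min_corr:
                break
    return adv.detach()
```

In the paper's setting, `model` would be the VGGNet trained on GTSRB and the stopping criteria would additionally include structural (SSIM-style) tests; the thresholds above are placeholders for illustration only.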