Recovery Guarantees for Compressible Signals with Adversarial Noise

We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in \cite{bafna2018thwarting} to defend neural networks against $\ell_0$-norm, $\ell_2$-norm, and $\ell_\infty$-norm attacks. Our results are general, as they can be applied to most unitary transforms used in practice, and hold for $\ell_0$-norm, $\ell_2$-norm, and $\ell_\infty$-norm bounded noise. In the case of $\ell_0$-norm noise, we prove recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For $\ell_2$-norm bounded noise, we provide recovery guarantees for BP, and for $\ell_\infty$-norm bounded noise, we provide recovery guarantees for the Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in \cite{bafna2018thwarting} for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of $\ell_0$-norm, $\ell_2$-norm, and $\ell_\infty$-norm attacks.
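To illustrate the style of reconstruction these guarantees cover, the sketch below shows a minimal Iterative Hard Thresholding loop for recovering a sparse coefficient vector under a unitary transform. This is only an assumed toy setup (random unitary matrix, small additive noise, hypothetical function and parameter names), not the paper's algorithmic or experimental configuration.

    import numpy as np

    def iht_recover(y, A, k, num_iters=50):
        """Minimal IHT sketch: alternate a gradient step on ||y - A x||_2^2
        with hard thresholding to the k largest-magnitude coefficients.
        A step size of 1 is used, which is appropriate when A has
        orthonormal columns (unitary transform)."""
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            # Gradient step on the least-squares objective
            x = x + A.T @ (y - A @ x)
            # Hard-threshold: zero out all but the top-k entries
            idx = np.argpartition(np.abs(x), -k)[:-k]
            x[idx] = 0.0
        return x

    # Toy example: k-sparse coefficients under a random unitary transform
    rng = np.random.default_rng(0)
    n, k = 64, 5
    A, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random unitary matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(n)       # small additive noise
    x_hat = iht_recover(y, A, k)
    print(np.linalg.norm(x_hat - x_true))

In a defense setting of the kind described above, the recovered top-k representation would be transformed back to the signal domain and passed to the classifier in place of the corrupted input.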