
Semantic Robustness of Models of Source Code

IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER), 2020
Abstract

Deep neural networks are vulnerable to adversarial examples: small input perturbations that cause incorrect predictions. We study this problem for models of source code, where we want the network to be robust to source-code modifications that preserve code functionality. (1) We define a powerful adversary that can employ sequences of parametric, semantics-preserving program transformations; (2) we show how to perform adversarial training to learn models robust to such adversaries; (3) we conduct an evaluation across different languages and architectures, demonstrating significant quantitative gains in robustness.
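To make the notion of a semantics-preserving program transformation concrete, here is a minimal sketch (not taken from the paper) of one such parametric transformation: renaming an identifier via Python's `ast` module. The resulting program computes the same function as the original but presents a different token sequence to a learned model; the names `RenameVariable` and `transform` are illustrative.

```python
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename one identifier everywhere it appears; a semantics-preserving edit."""

    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        # Variable uses and stores.
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node):
        # Function parameters.
        if node.arg == self.old:
            node.arg = self.new
        return node

def transform(source, old, new):
    """Apply the renaming transformation and return new source code."""
    tree = ast.parse(source)
    tree = RenameVariable(old, new).visit(tree)
    return ast.unparse(tree)

original = "def add(x, y):\n    return x + y"
perturbed = transform(original, "x", "tmp0")
# `perturbed` behaves identically to `original` but differs syntactically.
```

An adversary of the kind the paper defines would search over sequences of such transformations (and their parameters, e.g. the replacement name) to find a functionally equivalent program that flips the model's prediction.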
