Memorization and Generalization in Neural Code Intelligence Models

Abstract

Deep neural networks can learn highly generalizable patterns from large datasets of source code through millions of parameters. This large capacity, however, also renders them prone to memorizing individual data points. Recent work suggests that the risk of memorization manifests especially strongly when the training dataset is noisy, involving many ambiguous or questionable samples, and memorization is the only recourse. Unfortunately, most code intelligence tasks rely on noise-prone and repetitive data sources, such as code scraped from GitHub. Given the sheer size of such corpora, determining the role and extent of noise in them is beyond manual inspection. In this paper, we propose an alternative analysis: we evaluate the impact of noise on training neural models of source code by introducing targeted noise into the datasets of several state-of-the-art neural code intelligence models and benchmarks based on Java and Python codebases. By studying the resulting behavioral changes at various noise rates, and across a wide range of metrics, we can characterize both typical generalizing learning and problematic memorization-like learning in models of source code. Our results highlight important risks: millions of trainable parameters allow neural networks to memorize anything, including noisy data, and can provide a false sense of generalization. At the same time, the metrics used to analyze this phenomenon prove surprisingly useful for detecting and quantifying such effects, offering a powerful toolset for building reliable models of code.
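The core experimental idea, injecting targeted label noise at a controlled rate into a training set, can be illustrated with a minimal sketch. This is an assumption about the general technique, not the paper's actual pipeline; the function name, the flat label representation, and the uniform-random corruption strategy are all illustrative choices.

```python
import random

def inject_label_noise(labels, noise_rate, num_classes, seed=0):
    """Randomly reassign a fraction `noise_rate` of labels to a
    different class, simulating ambiguous or noisy training samples.

    Illustrative sketch only: real code-intelligence benchmarks
    (e.g., method-name prediction) have structured targets, and the
    paper's exact corruption scheme may differ.
    """
    rng = random.Random(seed)
    noisy = list(labels)
    n_noisy = int(round(noise_rate * len(labels)))
    # Pick distinct sample indices to corrupt.
    corrupted = rng.sample(range(len(labels)), n_noisy)
    for i in corrupted:
        # Replace the label with a uniformly chosen *different* class,
        # so every corrupted sample is genuinely mislabeled.
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy, set(corrupted)
```

Tracking the corrupted index set lets one later measure, per metric, how the model behaves on noisy versus clean samples, which is the basis for separating memorization from generalization.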
