Data augmentation instead of explicit regularization
Modern deep artificial neural networks have achieved impressive results with models whose number of parameters is very large compared to the number of training examples, and which control overfitting with the help of regularization. Regularization can be implicit, as in the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit. Explicit regularization techniques, of which the most common are weight decay and dropout, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have proven successful in terms of improved generalization, they seem to waste some capacity. In contrast, data augmentation techniques increase the number of training examples to improve generalization without reducing the effective capacity. Unlike weight decay and dropout, data augmentation is independent of the specific network architecture, since it is applied to the training data. In this paper we systematically compare data augmentation and explicit regularization on some popular architectures and data sets. Our results demonstrate that data augmentation alone can achieve the same or higher performance than regularized models and exhibits much higher adaptability to changes in the architecture and the amount of training data.
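As a minimal illustration of the idea (not the paper's exact pipeline), data augmentation generates additional training examples by applying label-preserving transformations, such as random horizontal flips and random crops for images. A sketch using NumPy, with hypothetical crop size and image dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, crop=28):
    """Produce a new training example from `image` via two common
    label-preserving transformations: a random horizontal flip
    followed by a random crop of size `crop` x `crop`."""
    if rng.random() < 0.5:
        image = image[:, ::-1]  # flip left-right
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return image[top:top + crop, left:left + crop]

# A hypothetical 32x32 single-channel image, cropped to 28x28.
img = rng.random((32, 32))
aug = augment(img)
print(aug.shape)  # (28, 28)
```

Because the transformations act only on the input data, the same augmentation pipeline can be reused unchanged across different network architectures.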