Diverse Gaussian Noise Consistency Regularization for Robustness and
Uncertainty Calibration under Noise Domain Shifts
Deep neural networks achieve high prediction accuracy when the train and test distributions coincide. In practice, however, various types of corruptions deviate from this setup and cause severe performance degradation. Few methods have been proposed to address generalization in the presence of unforeseen domain shifts. In particular, digital noise corruptions commonly arise during the image acquisition stage and present a significant challenge for current robustness approaches. In this paper, we propose a diverse Gaussian noise consistency regularization method for improving the robustness of image classifiers under a variety of noise corruptions while maintaining high clean accuracy. We derive bounds that motivate our Gaussian noise consistency regularization via a local loss landscape analysis. We show that this simple approach improves robustness against various unforeseen noise corruptions over standard training, adversarial training, and other strong baselines. Furthermore, when combined with diverse data augmentation techniques, we empirically show that this type of consistency regularization further improves robustness and uncertainty calibration for common corruptions over the state of the art on several image classification benchmarks.
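The abstract does not spell out the loss, but a generic Gaussian noise consistency regularizer can be sketched as follows: perturb each input with Gaussian noise drawn at several standard deviations and penalize the divergence between the model's predictions on the clean and noisy views. This is a minimal illustrative sketch, not the paper's exact objective; the toy linear `model`, the choice of KL divergence, and the `sigmas` grid are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def model(x, W):
    # toy linear classifier standing in for a deep network (assumption)
    return x @ W

def kl(p, q, eps=1e-12):
    # KL(p || q), one of several divergences one could use here
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def gaussian_consistency_loss(x, W, sigmas=(0.05, 0.1, 0.2), n_views=1):
    """Average divergence between predictions on clean inputs and on
    Gaussian-noised copies drawn at a diverse set of noise levels."""
    p_clean = softmax(model(x, W))
    losses = []
    for sigma in sigmas:            # diversity: multiple noise magnitudes
        for _ in range(n_views):
            x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
            p_noisy = softmax(model(x_noisy, W))
            losses.append(kl(p_clean, p_noisy).mean())
    return float(np.mean(losses))
```

In training, such a term would typically be added to the standard cross-entropy loss on clean inputs with a weighting coefficient, so the classifier retains clean accuracy while its predictions become stable under input noise.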