Multiplicative Reweighting for Robust Neural Network Optimization
Deep neural networks are widely used due to their powerful performance. Yet they suffer degraded performance in the presence of noisy labels at training time or adversarial examples at inference. Inspired by the setting of learning with expert advice, where multiplicative weights (MW) updates were recently shown to be robust to moderate adversarial corruptions, we propose to use MW to reweight examples during neural network optimization. We establish the convergence of our method when used with gradient descent and show its advantage in two simple examples. We then validate our findings empirically, demonstrating that MW improves network accuracy in the presence of label noise on CIFAR-10, CIFAR-100, and Clothing1M, and leads to better robustness to adversarial attacks.
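The core idea can be illustrated with a minimal sketch: maintain a weight per training example, take a weighted gradient step, then apply an MW update that exponentially downweights high-loss examples. This is a toy illustration on logistic regression, not the paper's implementation; all hyperparameters (the MW rate `eta_w`, step size `eta_gd`, noise rate) are assumptions chosen for the example.

```python
import numpy as np

# Toy sketch of MW example reweighting with weighted gradient descent.
# Logistic regression on synthetic data where 10% of labels are flipped.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
true_theta = rng.normal(size=d)
y = (X @ true_theta > 0).astype(float)
flip = rng.random(n) < 0.1          # simulate 10% label noise
y[flip] = 1.0 - y[flip]

theta = np.zeros(d)
w = np.full(n, 1.0 / n)             # per-example MW weights, start uniform
eta_w, eta_gd = 0.5, 0.5            # illustrative learning rates

def per_example_loss(theta):
    """Cross-entropy loss of each example under the current model."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    eps = 1e-12
    return -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (w * (p - y))      # gradient of the weighted loss
    theta -= eta_gd * grad
    # MW update: multiplicatively shrink weights of high-loss examples,
    # then renormalize so the weights stay a distribution.
    w *= np.exp(-eta_w * per_example_loss(theta))
    w /= w.sum()

# Noisy (flipped) examples accumulate high loss and end up downweighted.
print(w[flip].mean(), w[~flip].mean())
```

Because the model fits the clean majority, the flipped examples keep a high loss, and the MW update drives their weights toward zero, which is the intuition behind the robustness to label noise.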