Active Bias: Training a More Accurate Neural Network by Emphasizing High Variance Samples
This paper addresses the limitations of previous training methods that emphasize either easy examples (as in self-paced learning) or difficult examples (as in hard example mining). Inspired by active learning, we propose two alternative schemes for re-weighting training samples based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance of the predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct-class probability to the decision threshold (threshold closeness). Extensive experiments on multiple datasets show that our methods reliably improve accuracy across various network architectures, including additional gains on top of other popular training tools such as ADAM, dropout, and distillation.
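To make the two re-weighting schemes concrete, here is a minimal sketch in PyTorch. It is an illustration under stated assumptions, not the paper's implementation: it assumes the data loader yields per-sample indices, the class name `UncertaintyWeights`, the history length, and the smoothing constant `eps` are all hypothetical, and the paper's exact estimators and normalization differ in detail.

```python
import torch
import torch.nn.functional as F


class UncertaintyWeights:
    """Tracks p(correct class) per training sample across SGD iterations
    and derives per-sample weights from either (a) the variance of that
    history (high-variance emphasis) or (b) the closeness of its mean to
    the 0.5 decision threshold. Illustrative sketch only."""

    def __init__(self, num_samples, history_len=10, eps=0.05):
        self.history = [[] for _ in range(num_samples)]
        self.history_len = history_len  # window of recent iterations to keep
        self.eps = eps  # smoothing so no sample receives zero weight

    def update(self, sample_ids, probs_correct):
        # Record the correct-class probability for each sample in the batch.
        for i, p in zip(sample_ids.tolist(), probs_correct.tolist()):
            h = self.history[i]
            h.append(p)
            if len(h) > self.history_len:
                h.pop(0)

    def variance_weights(self, sample_ids):
        # Emphasize samples whose correct-class probability fluctuates.
        w = []
        for i in sample_ids.tolist():
            h = torch.tensor(self.history[i])
            w.append(h.std().item() if len(h) > 1 else 1.0)
        w = torch.tensor(w) + self.eps
        return w / w.mean()  # keep the average weight near 1

    def threshold_closeness_weights(self, sample_ids):
        # Emphasize samples near the decision threshold: weight
        # proportional to p * (1 - p), which is maximal at p = 0.5.
        w = []
        for i in sample_ids.tolist():
            h = self.history[i]
            p = sum(h) / len(h) if h else 0.5
            w.append(p * (1.0 - p))
        w = torch.tensor(w) + self.eps
        return w / w.mean()
```

A training step would then compute a weighted cross-entropy, for example:

```python
probs = F.softmax(logits, dim=1)
p_correct = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
tracker.update(sample_ids, p_correct.detach())
weights = tracker.variance_weights(sample_ids).to(logits.device)
loss = (weights * F.cross_entropy(logits, targets, reduction="none")).mean()
```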