Retraining with Predicted Hard Labels Provably Increases Model Accuracy

The performance of a model trained with noisy labels is often improved by simply \textit{retraining} the model with its \textit{own predicted hard labels} (i.e., 1/0 labels). Yet, a detailed theoretical characterization of this phenomenon is lacking. In this paper, we theoretically analyze retraining in a linearly separable binary classification setting where the given labels are randomly corrupted, and prove that retraining can improve the population accuracy obtained by initially training with the given (noisy) labels. To the best of our knowledge, this is the first such theoretical result. Retraining is applicable to improving training with local label differential privacy (DP), which inherently involves training with noisy labels. We empirically show that retraining selectively on the samples for which the predicted label matches the given label significantly improves label DP training at no extra privacy cost; we call this \textit{consensus-based retraining}. As an example, when training ResNet-18 on CIFAR-100 with label DP, we obtain more than 6% improvement in accuracy with consensus-based retraining.
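The consensus selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and array-based interface are assumptions, and the retraining itself (refitting the model on the selected subset) is only indicated in comments.

```python
import numpy as np

def consensus_indices(given_labels, predicted_labels):
    """Return indices of samples where the model's predicted hard label
    agrees with the given (noisy) label.

    Retraining the model on only these samples is the consensus-based
    retraining step; since it uses no new information about the true
    labels, it incurs no extra privacy cost under label DP.
    """
    given = np.asarray(given_labels)
    pred = np.asarray(predicted_labels)
    return np.flatnonzero(given == pred)

# Toy example with binary (1/0) hard labels:
given = np.array([1, 0, 1, 1, 0])   # noisy labels provided for training
pred  = np.array([1, 1, 1, 0, 0])   # hard labels predicted by the initial model
idx = consensus_indices(given, pred)
# idx -> array([0, 2, 4]); the model would then be retrained on these samples
```

The key design point is that the subset is chosen using only the already-released noisy labels and the model's own predictions, so the selection consumes no additional privacy budget.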
@article{das2025_2406.11206,
  title={Retraining with Predicted Hard Labels Provably Increases Model Accuracy},
  author={Rudrajit Das and Inderjit S. Dhillon and Alessandro Epasto and Adel Javanmard and Jieming Mao and Vahab Mirrokni and Sujay Sanghavi and Peilin Zhong},
  journal={arXiv preprint arXiv:2406.11206},
  year={2025}
}