Leveraging Gradients for Unsupervised Accuracy Estimation under Distribution Shift

Estimating the test performance of a model, possibly under distribution shift, without access to ground-truth labels is a challenging yet important problem for the safe deployment of machine learning algorithms in the wild. Existing works mostly rely on information from either the outputs or the extracted features of neural networks to estimate a score that correlates with the ground-truth test accuracy. In this paper, we investigate -- both empirically and theoretically -- how the information provided by the gradients can be predictive of the ground-truth test accuracy even under distribution shifts. More specifically, we use the norm of classification-layer gradients, backpropagated from the cross-entropy loss after only one gradient step over test data. Our intuition is that these gradients should be of higher magnitude when the model generalizes poorly. We provide the theoretical insights behind our approach and the key ingredients that ensure its empirical success. Extensive experiments conducted with various architectures on diverse distribution shifts demonstrate that our method significantly outperforms current state-of-the-art approaches. The code is available at this https URL.
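A minimal PyTorch sketch of the idea described above, not the authors' implementation: it computes the average norm of the classification-layer gradients obtained by backpropagating a cross-entropy loss over unlabeled test data. The function name `classification_layer_grad_norm` and the use of the model's own argmax predictions as cross-entropy targets are illustrative assumptions; the abstract only specifies a cross-entropy loss backpropagated to the classification layer.

```python
# Hedged sketch: gradient-norm score from the classification layer over test data.
# Assumptions (not from the paper): pseudo-labels (argmax predictions) serve as
# cross-entropy targets, and the score is averaged over test batches.
import torch
import torch.nn.functional as F


def classification_layer_grad_norm(model, classifier_head, test_loader, device="cpu"):
    """Average norm of classification-layer gradients over unlabeled test data."""
    model.eval()
    norms = []
    for inputs, _ in test_loader:          # ground-truth labels are ignored
        inputs = inputs.to(device)
        classifier_head.zero_grad()
        logits = model(inputs)
        # Assumed target choice: the model's own hard predictions.
        targets = logits.argmax(dim=1).detach()
        loss = F.cross_entropy(logits, targets)
        loss.backward()
        # Norm restricted to the classification layer's parameters.
        grad_vec = torch.cat([p.grad.flatten()
                              for p in classifier_head.parameters()
                              if p.grad is not None])
        norms.append(grad_vec.norm().item())
    # The intuition from the abstract: larger values should indicate poorer
    # generalization, so the score is expected to correlate negatively with accuracy.
    return sum(norms) / max(len(norms), 1)
```

In practice, such a score would be compared against ground-truth test accuracy across shifted datasets (e.g., via correlation), which is the standard evaluation protocol for unsupervised accuracy estimation.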
@article{xie2025_2401.08909,
  title   = {Leveraging Gradients for Unsupervised Accuracy Estimation under Distribution Shift},
  author  = {Renchunzi Xie and Ambroise Odonnat and Vasilii Feofanov and Ievgen Redko and Jianfeng Zhang and Bo An},
  journal = {arXiv preprint arXiv:2401.08909},
  year    = {2025}
}