Non-vacuous Generalization Bounds for Deep Neural Networks without any modification to the trained models
Deep neural networks (NNs) with millions or billions of parameters can perform remarkably well on unseen data after being trained on a finite training set. Various prior theories have been developed to explain this excellent ability of NNs, but they do not provide meaningful bounds on the test error. Some recent theories, based on PAC-Bayes and mutual information, are non-vacuous and hence show great potential to explain the excellent performance of NNs. However, they often require stringent assumptions and extensive modifications (e.g., compression, quantization) to the trained model of interest. As a result, those prior theories provide a guarantee only for the modified versions. In this paper, we propose two novel bounds on the test error of a model. Our bounds use only the training set and require no modification to the model. These bounds are verified on a large class of modern NNs, pretrained by PyTorch on the ImageNet dataset, and are non-vacuous. To the best of our knowledge, these are the first non-vacuous bounds at this large scale, without any modification to the pretrained models.
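As a rough illustration of the evaluation setup described in the abstract (not the paper's actual bounds, whose formulas are not given here), the sketch below loads a torchvision model pretrained on ImageNet, leaves its weights untouched, and computes the empirical training error, i.e., the kind of training-set-only quantity such bounds are built from. The ImageNet path and the final complexity term are placeholders, and `training_error` is a hypothetical helper introduced only for this sketch.

```python
# Illustrative sketch only: the bound itself is not reproduced here.
# It shows the protocol from the abstract: take a model pretrained by PyTorch,
# do not modify it, and compute bound ingredients from the training set alone.
import torch
import torchvision
from torchvision.models import resnet50, ResNet50_Weights


def training_error(model, loader, device="cuda"):
    """Top-1 error of the unmodified model on the training set."""
    model.eval().to(device)
    wrong, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            wrong += (preds != labels).sum().item()
            total += labels.numel()
    return wrong / total


# Pretrained ImageNet model from torchvision, used exactly as released.
weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights)

# ImageNet training split; the root path is a placeholder for a local copy.
train_set = torchvision.datasets.ImageNet(
    root="/path/to/imagenet", split="train", transform=weights.transforms()
)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, num_workers=8)

emp_err = training_error(model, train_loader)
# A non-vacuous bound would then read: test_error <= emp_err + complexity_term,
# where the complexity term comes from the paper's theory (not shown here).
print(f"empirical training error: {emp_err:.4f}")
```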
@article{than2025_2503.07325,
  title={Non-vacuous Generalization Bounds for Deep Neural Networks without any modification to the trained models},
  author={Khoat Than and Dat Phan},
  journal={arXiv preprint arXiv:2503.07325},
  year={2025}
}