De-randomized PAC-Bayes Margin Bounds: Applications to Non-convex and Non-smooth Predictors

Abstract

Despite several notable efforts, explaining the generalization of deterministic non-smooth deep nets, e.g., ReLU-nets, has remained challenging. Existing approaches for non-smooth deep nets typically need to bound the Lipschitz constant of such networks, but these bounds are quite large and may even increase with the training set size, yielding vacuous generalization bounds. In this paper, we present a new de-randomized PAC-Bayes margin bound for deterministic non-convex and non-smooth predictors, e.g., ReLU-nets. The bound depends on a trade-off between the $L_2$-norm of the weights and the effective curvature ('flatness') of the predictor, avoids any dependency on the Lipschitz constant, and yields meaningful (decreasing) bounds as the training set size increases. We first develop a de-randomization argument for non-convex but smooth predictors, e.g., linear deep networks (LDNs). We then consider non-smooth predictors which, for any given input, realize as a smooth predictor: a ReLU-net, for instance, acts as some LDN on any given input, but the realized smooth predictor can differ across inputs. For such non-smooth predictors, we introduce a new PAC-Bayes analysis that takes advantage of the smoothness of the realized predictor, e.g., an LDN, for a given input. After careful de-randomization, we obtain a bound for the deterministic non-smooth predictor. We present empirical results illustrating the behavior of our bounds under changing training set sizes and randomness in labels.
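For context, the following is a standard (background) PAC-Bayes margin bound of the kind that de-randomized bounds build on; it is an illustrative sketch of the generic form, not the paper's specific result, and the symbols ($P$, $Q$, $m$, $\gamma$, $\delta$) are the usual ones from the PAC-Bayes literature rather than the paper's notation. With probability at least $1-\delta$ over a training sample of size $m$, for any posterior $Q$ over predictor weights and any fixed prior $P$,
\[
  L_0(Q) \;\le\; \widehat{L}_{\gamma}(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
\]
where $L_0(Q)$ is the expected 0-1 loss of the stochastic predictor drawn from $Q$ and $\widehat{L}_{\gamma}(Q)$ is its empirical margin loss at margin $\gamma$. De-randomization converts such a bound on the stochastic predictor $Q$ into a bound on a single deterministic predictor, e.g., a trained ReLU-net.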
