Bayesian Regularization on Function Spaces via Q-Exponential Process

Neural Information Processing Systems (NeurIPS), 2022
Abstract

Regularization is one of the most important topics in optimization, statistics, and machine learning. To obtain sparsity in estimating a parameter $u\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert u\Vert_q$, is usually added to the objective function. What is the probability distribution corresponding to such an $\ell_q$ penalty? What is the correct stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling high-dimensional objects, e.g. images, with a penalty that preserves certain properties, e.g. edges in the image. In this work, we generalize the $q$-exponential distribution (with density proportional to $\exp(-\frac{1}{2}|u|^q)$) to a stochastic process named the \emph{$Q$-exponential (Q-EP) process} that corresponds to the $L_q$ regularization of functions. The key step is to specify consistent multivariate $q$-exponential distributions by choosing from a large family of elliptic contour distributions. The work is closely related to the Besov process, which is usually defined through a series expansion. Q-EP can be regarded as a definition of the Besov process with an explicit probabilistic formulation and direct control over the correlation length. From the Bayesian perspective, Q-EP provides a flexible prior on functions with a sharper penalty ($q<2$) than the commonly used Gaussian process (GP). We compare GP, Besov, and Q-EP in modeling time series and reconstructing images, and demonstrate the advantage of the proposed methodology.
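The link between the penalty and the prior can be seen directly: the negative log of the unnormalized density $\exp(-\frac{1}{2}|u|^q)$ is $\frac{1}{2}|u|^q$, i.e. an $\ell_q$ penalty on $u$. A minimal NumPy sketch (function names here are illustrative, not from the paper's code) showing the univariate density and why $q<2$ gives a sharper penalty on small values than the Gaussian case $q=2$:

```python
import numpy as np

def q_exp_density(u, q):
    """Unnormalized q-exponential density, proportional to exp(-0.5 * |u|**q).

    q = 2 recovers the Gaussian shape; q < 2 peaks more sharply at zero,
    which is what induces sparsity when used as a prior.
    """
    return np.exp(-0.5 * np.abs(u) ** q)

def lq_penalty(u, q):
    """Negative log of the unnormalized density: the 0.5 * |u|^q penalty."""
    return 0.5 * np.abs(u) ** q

# For a small nonzero value, q = 1 assigns a larger penalty (lower density)
# than q = 2, so the q = 1 prior pushes small coefficients toward zero harder.
u = 0.5
print(lq_penalty(u, 1), lq_penalty(u, 2))  # 0.25 vs 0.125
```

This is only the one-dimensional picture; the paper's contribution is extending it consistently to the multivariate and process settings via elliptic contour distributions.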
