Mirror Bridges Between Probability Measures
Resampling from a target measure whose density is unknown is a fundamental problem in mathematical statistics and machine learning. A setting that dominates the machine learning literature consists of learning a map from an easy-to-sample prior, such as a Gaussian distribution, to a target measure. Under this model, samples from the prior are pushed forward to generate new samples from the target measure, which is often difficult to sample from directly. A related problem of particular interest is generating a new sample proximate to, or otherwise conditioned on, a given input sample. In this paper, we propose a new model, the mirror bridge, to solve this problem of conditional resampling. Our key observation is that solving the Schrödinger bridge problem between a distribution and itself provides a natural way to produce new samples that are in-distribution variations of an input data point. We demonstrate how to efficiently estimate the solution of this largely overlooked version of the Schrödinger bridge problem. We show that our proposed method leads to significant algorithmic simplifications over existing alternatives, in addition to providing control over the degree of in-distribution variation. Empirically, we demonstrate how these benefits can be leveraged to produce proximal samples in a number of application domains.
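To make the "bridge between a distribution and itself" idea concrete, here is a minimal sketch for the one Gaussian case where the static Schrödinger (entropic) self-coupling has a closed form. This is a toy illustration, not the paper's learned method: for a target N(0, s²) and entropic regularization `eps` (both illustrative parameters), the optimal coupling is a bivariate Gaussian with correlation ρ = 2s² / (eps + √(4s⁴ + eps²)), so resampling y | x draws an in-distribution variation of x whose proximity is controlled by `eps`.

```python
import numpy as np

def mirror_resample(x, s=1.0, eps=0.5, rng=None):
    """Draw y | x under the closed-form entropic self-coupling of N(0, s^2).

    The coupling is Gaussian with correlation
        rho = 2 s^2 / (eps + sqrt(4 s^4 + eps^2)),
    so y | x ~ N(rho * x, s^2 * (1 - rho^2)).
    eps -> 0 recovers y = x (identity coupling); eps -> inf gives an
    independent fresh draw from N(0, s^2).
    """
    rng = np.random.default_rng() if rng is None else rng
    rho = 2 * s**2 / (eps + np.sqrt(4 * s**4 + eps**2))
    noise = rng.standard_normal(np.shape(x))
    return rho * x + np.sqrt(s**2 * (1 - rho**2)) * noise

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)                 # inputs from N(0, 1)
y = mirror_resample(x, s=1.0, eps=0.5, rng=rng)  # conditional resamples

print(np.std(y))                # stays close to 1: the marginal is preserved
print(np.corrcoef(x, y)[0, 1])  # strictly positive: y remains near its input x
```

The key property mirrors the abstract's claim: the output marginal matches the target exactly for every `eps`, while `eps` tunes how proximate each resample stays to its input.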