
Stochastic mirror descent for nonparametric adaptive importance sampling

Abstract

This paper addresses the problem of approximating an unknown probability distribution with density $f$ -- which can only be evaluated up to an unknown scaling factor -- with the help of a sequential algorithm that produces at each iteration $n \geq 1$ an estimated density $q_n$. The proposed method optimizes the Kullback-Leibler divergence using a mirror descent (MD) algorithm directly on the space of density functions, while a stochastic approximation technique helps to manage the trade-off between algorithm complexity and variability. One of the key innovations of this work is the theoretical guarantee provided for an algorithm with a fixed MD learning rate $\eta \in (0,1)$. The main result is that the sequence $q_n$ converges almost surely to the target density $f$ uniformly on compact sets. Through numerical experiments, we show that fixing the learning rate $\eta \in (0,1)$ significantly improves the algorithm's performance, particularly for multi-modal target distributions, where a small value of $\eta$ increases the chance of finding all modes. Additionally, we propose a particle subsampling method to enhance computational efficiency and compare our method against other approaches through numerical experiments.
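
To make the fixed-rate recursion concrete, below is a minimal sketch of mirror descent on the KL objective with the entropic mirror map, which yields the multiplicative update $q_{n+1} \propto q_n^{1-\eta} f^{\eta}$. Everything here is an illustrative assumption rather than the paper's actual algorithm: the 1D bimodal target `f_unnorm`, the grid-based representation of $q_n$, the value $\eta = 0.3$, and the deterministic (noise-free) update; the paper's method is nonparametric, stochastic, and particle-based.

```python
import numpy as np

def f_unnorm(x):
    """Hypothetical bimodal target, evaluable only up to a scaling factor."""
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.6 * np.exp(-0.5 * (x + 2.0) ** 2)

x = np.linspace(-8.0, 8.0, 2001)   # evaluation grid (illustration only)
dx = x[1] - x[0]

def normalize(p):
    """Renormalize so the grid density integrates to one."""
    return p / (p.sum() * dx)

eta = 0.3                                      # fixed MD learning rate in (0, 1)
q = normalize(np.exp(-0.5 * (x / 4.0) ** 2))   # broad initial density q_0

for n in range(50):
    # Entropic mirror descent step on KL(q || f):
    #   q_{n+1} ∝ q_n * exp(-eta * log(q_n / f)) = q_n^{1-eta} * f^{eta}.
    # The unknown normalizing constant of f is a multiplicative factor
    # and is absorbed by the renormalization step.
    q = normalize(q ** (1.0 - eta) * f_unnorm(x) ** eta)

print("sup-norm error:", np.max(np.abs(q - normalize(f_unnorm(x)))))
```

In this noise-free version the iterates satisfy $q_n \propto q_0^{(1-\eta)^n} f^{1-(1-\eta)^n}$, so the error contracts geometrically at rate $1-\eta$; this also illustrates why a smaller $\eta$ keeps the broad initial density $q_0$ influential for longer, which helps the sampler cover both modes before concentrating.
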
