Efficiency of adaptive importance sampling

The \textit{sampling policy} at stage $t$, formally expressed as a probability density function $q_t$, stands for the distribution of the sample generated at stage $t$. From the past samples, information depending on some \textit{objective} is derived, eventually leading to an update of the sampling policy to $q_{t+1}$. This generic approach characterizes \textit{adaptive importance sampling} (AIS) schemes. Each stage $t$ consists of two steps: (i) explore the space with $n_t$ points drawn according to $q_t$, and (ii) exploit the current amount of information to update the sampling policy. The fundamental question raised in the paper concerns the behavior of empirical sums based on AIS. Without making any assumption on the \textit{allocation policy} $(n_t)_{t \geq 1}$, the theory developed involves no restriction on the split of computational resources between the explore step (i) and the exploit step (ii). It is shown that AIS is efficient: the asymptotic behavior of AIS matches that of an "oracle" strategy that knows the optimal sampling policy from the beginning. From a practical perspective, \textit{weighted AIS} is introduced, a new method that allows poor samples from early stages to be forgotten.
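The explore/exploit loop described above can be sketched in Python. This is a minimal illustration, not the paper's exact scheme: the target $f$, the Gaussian family for $q_t$, the moment-matching update, and the allocation $n_t$ are all illustrative choices; the estimator is the plain empirical sum over all stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target density f (a standard normal centered at 2) and
# quantity of interest I = \int phi(x) f(x) dx with phi(x) = x, so I = 2.
def f(x):
    return np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)

def phi(x):
    return x

# Sampling policy q_t: Gaussian with parameters (mu, sigma), updated per stage.
mu, sigma = 0.0, 3.0            # deliberately poor initial policy q_0
allocation = [200, 200, 200, 200]  # allocation policy: n_t points per stage

all_weighted_vals = []
for n in allocation:
    # (i) explore: draw n points from the current policy q_t
    x = rng.normal(mu, sigma, size=n)
    q = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    w = f(x) / q                # importance weights
    all_weighted_vals.append(w * phi(x))
    # (ii) exploit: update the policy by weighted moment matching
    # (an illustrative objective; the paper's framework is generic)
    wn = w / w.sum()
    mu = np.sum(wn * x)
    sigma = max(np.sqrt(np.sum(wn * (x - mu) ** 2)), 0.1)

# Empirical sum pooling all stages, including the poor early ones
estimate = np.concatenate(all_weighted_vals).mean()
print(estimate)                 # estimate of I (true value 2.0)
```

The weighted AIS variant mentioned above would, instead of pooling all stages uniformly as done here, downweight the early stages whose policies were poor.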