  3. 1511.01437
40
188

The sample size required in importance sampling

4 November 2015
S. Chatterjee
P. Diaconis
Abstract

The goal of importance sampling is to estimate the expected value of a given function with respect to a probability measure $\nu$ using a random sample of size $n$ drawn from a different probability measure $\mu$. If the two measures $\mu$ and $\nu$ are nearly singular with respect to each other, which is often the case in practice, the sample size required for accurate estimation is large. In this article it is shown that in a fairly general setting, a sample of size approximately $\exp(D(\nu\|\mu))$ is necessary and sufficient for accurate estimation by importance sampling, where $D(\nu\|\mu)$ is the Kullback–Leibler divergence of $\mu$ from $\nu$. In particular, the required sample size exhibits a kind of cut-off in the logarithmic scale. The theory is applied to obtain a general formula for the sample size required in importance sampling for one-parameter exponential families (Gibbs measures).
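As a numerical illustration (not taken from the paper), the $\exp(D(\nu\|\mu))$ sample-size heuristic can be checked for a pair of unit-variance Gaussians, where both the density ratio $d\nu/d\mu$ and the divergence are available in closed form. All parameter values and the padding constant below are arbitrary choices for the sketch:

```python
import math
import random

def importance_sampling_mean(delta, n, seed=0):
    """Estimate E_nu[X] for nu = N(delta, 1) using n samples from mu = N(0, 1).

    The importance weight is the likelihood ratio
        (dnu/dmu)(x) = exp(delta * x - delta**2 / 2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)                      # draw from mu, not nu
        w = math.exp(delta * x - delta * delta / 2)  # reweight toward nu
        total += w * x
    return total / n

delta = 1.5
kl = delta * delta / 2            # D(nu || mu) = delta^2 / 2 for these Gaussians
n = int(math.exp(kl)) * 2000      # sample size on the order of exp(D), padded
est = importance_sampling_mean(delta, n)
```

With a sample size well above $\exp(D(\nu\|\mu))$, the estimate should land close to the true mean $\delta$; shrinking $n$ far below $\exp(D(\nu\|\mu))$ makes the weighted estimate unreliable, which is the cut-off behavior the abstract describes.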
