The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning

9 February 2023
Adam Block
Yury Polyanskiy
arXiv:2302.04658
Abstract

Suppose we are given access to $n$ independent samples from a distribution $\mu$ and we wish to output one of them with the goal of making the output distributed as close as possible to a target distribution $\nu$. In this work we show that the optimal total variation distance as a function of $n$ is given by $\tilde\Theta\big(\frac{D}{f'(n)}\big)$ over the class of all pairs $\nu,\mu$ with a bounded $f$-divergence $D_f(\nu\|\mu)\leq D$. Previously, this question was studied only for the case when the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (bounded $f$-divergence, as opposed to a bounded Radon-Nikodym derivative). Finally, we also study the efficacy of importance sampling for mean estimates uniform over a function class and compare importance sampling with rejection sampling.
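To make the rate concrete, the stated bound can be specialized to two standard choices of $f$; this instantiation is our own plugging-in to the formula above, not a quotation from the paper:

$$
\text{KL } \big(f(x)=x\log x\big):\; f'(n)=\log n+1 \;\Rightarrow\; \tilde\Theta\!\Big(\frac{D}{\log n}\Big),
\qquad
\chi^2 \big(f(x)=x^2-1\big):\; f'(n)=2n \;\Rightarrow\; \tilde\Theta\!\Big(\frac{D}{n}\Big).
$$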

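As an informal illustration of the two schemes the abstract compares (a sketch, not the paper's construction), the Python snippet below draws one approximately $\nu$-distributed sample out of $n$ draws from $\mu$ via a capped rejection step, and separately uses self-normalized importance weights on $n$ draws to estimate a mean under $\nu$. The Gaussian pair $\mu=N(0,1)$, $\nu=N(1,1)$ and all helper names are assumptions chosen for the example; here the density ratio $\frac{d\nu}{d\mu}(x)=e^{x-1/2}$ is unbounded, so the uniformly-bounded-ratio setting does not apply, while $D_f(\nu\|\mu)$ remains finite (e.g. KL $=1/2$).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative pair (an assumption, not from the paper):
#   mu = N(0, 1), nu = N(1, 1), so dnu/dmu(x) = exp(x - 1/2) is unbounded,
# ruling out classical rejection sampling with a uniform bound on the ratio,
# while the f-divergence D_f(nu || mu) stays finite (e.g. KL = 1/2).

def density_ratio(x):
    """Radon-Nikodym derivative dnu/dmu for the Gaussian example above."""
    return np.exp(x - 0.5)

def approx_rejection_sample(n, cap):
    """Return one of n i.i.d. draws from mu, accepted with probability
    min(density_ratio, cap) / cap; the output law approximates nu, and would
    be exactly nu if cap upper-bounded the ratio everywhere."""
    xs = rng.normal(0.0, 1.0, size=n)
    accept = rng.uniform(size=n) * cap < np.minimum(density_ratio(xs), cap)
    accepted = xs[accept]
    return accepted[0] if accepted.size else None  # None if all n are rejected

def importance_mean(n, h):
    """Self-normalized importance-sampling estimate of E_nu[h(X)] from n mu-draws."""
    xs = rng.normal(0.0, 1.0, size=n)
    w = density_ratio(xs)
    return float(np.sum(w * h(xs)) / np.sum(w))

print(approx_rejection_sample(n=1_000, cap=20.0))  # one approximate draw from nu
print(importance_mean(n=1_000, h=lambda x: x))     # should be close to E_nu[X] = 1
```

Raising `cap` shrinks the bias of the rejection step but lowers the acceptance probability, which is precisely the accuracy-versus-$n$ tradeoff the paper quantifies.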