ResearchTrend.AI

arXiv:2405.04363
Some Notes on the Sample Complexity of Approximate Channel Simulation

7 May 2024
Gergely Flamich
Lennie Wells
Abstract

Channel simulation algorithms can efficiently encode random samples from a prescribed target distribution $Q$ and find applications in machine learning-based lossy data compression. However, algorithms that encode exact samples usually have random runtime, limiting their applicability when a consistent encoding time is desirable. Thus, this paper considers approximate schemes with a fixed runtime instead. First, we strengthen a result of Agustsson and Theis and show that there is a class of pairs of target distribution $Q$ and coding distribution $P$, for which the runtime of any approximate scheme scales at least super-polynomially in $D_\infty[Q \Vert P]$. We then show, by contrast, that if we have access to an unnormalised Radon-Nikodym derivative $r \propto dQ/dP$ and knowledge of $D_{KL}[Q \Vert P]$, we can exploit global-bound, depth-limited A* coding to ensure $\mathrm{TV}[Q \Vert P] \leq \epsilon$ and maintain optimal coding performance with a sample complexity of only $\exp_2\big((D_{KL}[Q \Vert P] + o(1))/\epsilon\big)$.
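The fixed-runtime scheme sketched in the abstract can be illustrated with a perturb-and-MAP style approximation: draw a fixed budget of $N = 2^{D_{KL}[Q \Vert P]/\epsilon}$ candidates from the coding distribution $P$ (the $o(1)$ term is dropped for illustration), perturb the unnormalised log-density ratio of each with Gumbel noise, and encode the index of the maximiser. This is a minimal, hypothetical sketch of the idea, not the paper's exact global-bound, depth-limited A* coding algorithm; the function names `approx_channel_simulation`, `r`, and `sample_p` are illustrative assumptions.

```python
import math
import numpy as np

def approx_channel_simulation(r, sample_p, d_kl, eps, rng):
    """Fixed-budget approximate channel simulation (illustrative sketch).

    r        -- unnormalised Radon-Nikodym derivative, r(x) ∝ dQ/dP(x)
    sample_p -- draws n i.i.d. samples from the coding distribution P
    d_kl     -- assumed-known value of D_KL[Q || P]
    eps      -- target total-variation tolerance
    """
    # Sample budget from the abstract's complexity bound, with the o(1)
    # term dropped for illustration: N = 2^(D_KL[Q||P] / eps).
    n = math.ceil(2 ** (d_kl / eps))
    xs = sample_p(n, rng)             # N i.i.d. proposals from P
    gumbels = rng.gumbel(size=n)      # perturb-and-MAP (Gumbel-max) noise
    scores = np.log(r(xs)) + gumbels  # unnormalised log-ratio + Gumbel
    k = int(np.argmax(scores))        # the winning index is the code
    return k, xs[k]

# Toy example: Q = N(mu, 1), P = N(0, 1), so D_KL[Q||P] = mu^2 / 2 and
# r(x) = exp(mu * x) is an unnormalised version of dQ/dP.
rng = np.random.default_rng(0)
mu = 1.0
idx, x = approx_channel_simulation(
    r=lambda x: np.exp(mu * x),
    sample_p=lambda n, rng: rng.standard_normal(n),
    d_kl=mu ** 2 / 2,
    eps=0.25,
    rng=rng,
)
```

Only the index `idx` needs to be transmitted; a decoder sharing the seed and $P$ can regenerate the same $N$ candidates and recover the sample. Note that the budget $2^{D_{KL}/\epsilon}$ grows quickly as $\epsilon$ shrinks, which is exactly the exponential sample complexity the abstract quantifies.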
