Channel simulation algorithms can efficiently encode random samples from a prescribed target distribution $Q$ and find applications in machine learning-based lossy data compression. However, algorithms that encode exact samples usually have random runtime, limiting their applicability when a consistent encoding time is desirable. Thus, this paper considers approximate schemes with a fixed runtime instead. First, we strengthen a result of Agustsson and Theis and show that there is a class of pairs of target distribution $Q$ and coding distribution $P$ for which the runtime of any approximate scheme scales at least super-polynomially in $D_\infty[Q \Vert P]$. We then show, by contrast, that if we have access to an unnormalised Radon-Nikodym derivative $r \propto \mathrm{d}Q/\mathrm{d}P$ and knowledge of $D_{KL}[Q \Vert P]$, we can exploit global-bound, depth-limited A* coding to ensure $\mathrm{TV}[Q \Vert \tilde{Q}] \leq \epsilon$, where $\tilde{Q}$ is the distribution of the returned approximate sample, and maintain optimal coding performance with a sample complexity of only $2^{(D_{KL}[Q \Vert P] + o(1))/\epsilon}$.
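To make the second result concrete, below is a minimal Python sketch of depth-limited, global-bound A* sampling. When the bounding function is a single global constant, A* sampling explores candidates in decreasing order of their Gumbel-process perturbations, so truncating the search at depth $N$ amounts to drawing the $N$ largest arrivals of a Gumbel process over $P$, pairing them with i.i.d. proposals from $P$, and returning the argmax of the perturbed unnormalised log-weights. The function and argument names, and the concrete budget constant in the exponent, are illustrative assumptions rather than the paper's notation; only the overall scheme follows the A* sampling/coding literature.

```python
import numpy as np


def depth_limited_astar_sample(sample_p, log_r, kl_bound, eps, rng):
    """Sketch of global-bound, depth-limited A* (Gumbel-process) sampling.

    sample_p: draws one sample from the coding distribution P.
    log_r:    unnormalised log Radon-Nikodym derivative log r(x), r ∝ dQ/dP.
    kl_bound: known value (or upper bound) of D_KL[Q || P].
    eps:      target total-variation error.
    Returns (index, sample); the index is the message to be entropy-coded.
    """
    # Assumed budget mirroring the stated 2^((D_KL + o(1))/eps) sample
    # complexity; the additive constant 1 is illustrative, not from the paper.
    n_steps = int(np.ceil(2.0 ** ((kl_bound + 1.0) / eps)))

    best_idx, best_val, best_x = -1, -np.inf, None
    arrival = rng.exponential()  # first arrival of a unit-rate Poisson process
    for i in range(n_steps):
        gumbel = -np.log(arrival)  # i-th largest Gumbel of the process over P
        x = sample_p(rng)
        val = gumbel + log_r(x)    # Gumbel-perturbed unnormalised log-weight
        if val > best_val:
            best_idx, best_val, best_x = i, val, x
        arrival += rng.exponential()  # next arrival -> next-largest Gumbel
    return best_idx, best_x


# Toy run with P = N(0, 1) and Q = N(1, 1), so that
# log dQ/dP(x) = x - 1/2 and D_KL[Q || P] = 1/2.
rng = np.random.default_rng(0)
idx, x = depth_limited_astar_sample(
    sample_p=lambda g: g.normal(),
    log_r=lambda x: x - 0.5,
    kl_bound=0.5,
    eps=0.5,
    rng=rng,
)
```

As in the A* coding literature, the returned index can be entropy-coded, e.g. with a Zipf-like distribution over indices, which is what yields the near-optimal codelength the abstract refers to; the sketch above only covers the sampling step.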