Parallel Sampling via Counting

18 August 2024
Nima Anari
Ruiquan Gao
Aviad Rubinstein
arXiv:2408.09442
Abstract

We show how to use parallelization to speed up sampling from an arbitrary distribution $\mu$ on a product space $[q]^n$, given oracle access to counting queries: $\mathbb{P}_{X\sim \mu}[X_S=\sigma_S]$ for any $S\subseteq [n]$ and $\sigma_S \in [q]^S$. Our algorithm takes $O(n^{2/3}\cdot \operatorname{polylog}(n,q))$ parallel time, which is, to the best of our knowledge, the first runtime sublinear in $n$ for arbitrary distributions. Our results have implications for sampling in autoregressive models. Our algorithm works directly with an equivalent oracle that answers conditional marginal queries $\mathbb{P}_{X\sim \mu}[X_i=\sigma_i \mid X_S=\sigma_S]$, whose role is played by a trained neural network in autoregressive models. This suggests that a roughly $n^{1/3}$-factor speedup is possible for sampling in any-order autoregressive models. We complement our positive result by showing a lower bound of $\widetilde{\Omega}(n^{1/3})$ on the runtime of any parallel sampling algorithm making at most $\operatorname{poly}(n)$ queries to the counting oracle, even for $q=2$.
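
To make the oracle model concrete, the following is a minimal Python sketch of the counting / conditional-marginal interface described in the abstract, paired with the standard sequential sampler that uses $n$ adaptive rounds of conditional queries. It is illustrative only and is not the paper's $O(n^{2/3}\cdot\operatorname{polylog}(n,q))$-round algorithm; the `CountingOracle` class, its method names, and the toy distribution are hypothetical, introduced here solely for illustration.

```python
# Illustrative sketch (not the paper's algorithm): the counting / conditional-marginal
# oracle interface from the abstract, plus the standard sequential baseline that
# samples one coordinate per adaptive round. All names below are hypothetical.
import itertools
import random


class CountingOracle:
    """Answers P_{X~mu}[X_S = sigma_S] for a distribution mu on [q]^n.

    Here mu is stored explicitly as {tuple -> probability}; in an
    autoregressive model the same queries would be answered by the network.
    """

    def __init__(self, mu: dict, n: int, q: int):
        self.mu, self.n, self.q = mu, n, q

    def marginal(self, assignment: dict) -> float:
        """P[X_S = sigma_S] where S = assignment.keys()."""
        return sum(p for x, p in self.mu.items()
                   if all(x[i] == v for i, v in assignment.items()))

    def conditional(self, i: int, value: int, assignment: dict) -> float:
        """P[X_i = value | X_S = sigma_S], the equivalent oracle from the abstract."""
        denom = self.marginal(assignment)
        if denom == 0.0:
            raise ValueError("conditioning event has probability zero")
        return self.marginal({**assignment, i: value}) / denom


def sequential_sample(oracle: CountingOracle, rng: random.Random) -> tuple:
    """Baseline sampler: n adaptive rounds, one conditional query batch per round.

    The paper's contribution is to reduce the number of adaptive rounds to
    roughly n^{2/3} (up to polylog factors); this baseline only shows the
    interface such a parallel algorithm would be built on.
    """
    assignment = {}
    for i in range(oracle.n):  # one adaptive round per coordinate
        probs = [oracle.conditional(i, v, assignment) for v in range(oracle.q)]
        assignment[i] = rng.choices(range(oracle.q), weights=probs, k=1)[0]
    return tuple(assignment[i] for i in range(oracle.n))


if __name__ == "__main__":
    # Toy distribution on [2]^3 (q = 2, n = 3) with random normalized weights.
    rng = random.Random(0)
    n, q = 3, 2
    weights = {x: rng.random() for x in itertools.product(range(q), repeat=n)}
    total = sum(weights.values())
    mu = {x: w / total for x, w in weights.items()}

    oracle = CountingOracle(mu, n, q)
    print(sequential_sample(oracle, rng))
```

The baseline is correct by the chain rule but inherently uses $n$ sequential rounds, since each coordinate is conditioned on all previously sampled ones; the paper's result shows that, with counting-oracle access, the number of adaptive rounds can be driven down to $\widetilde{O}(n^{2/3})$, with a $\widetilde{\Omega}(n^{1/3})$ lower bound for any polynomial-query parallel sampler.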
