ResearchTrend.AI

Quasi Markov Chain Monte Carlo Methods

Tobias Schwedes, B. Calderhead
arXiv:1807.00070 · 29 June 2018
Abstract

Quasi-Monte Carlo (QMC) methods for estimating integrals are attractive since the resulting estimators converge at a faster rate than pseudo-random Monte Carlo. However, they can be difficult to set up on arbitrary posterior densities within the Bayesian framework, in particular for inverse problems. We introduce a general parallel Markov chain Monte Carlo (MCMC) framework, for which we prove a law of large numbers and a central limit theorem. We further extend this approach to the use of adaptive kernels and state conditions under which ergodicity holds. As a further extension, an importance sampling estimator is derived, for which asymptotic unbiasedness is proven. We consider the use of completely uniformly distributed (CUD) numbers and non-reversible transitions within the above stated methods, which leads to a general parallel quasi-MCMC (QMCMC) methodology. We prove consistency of the resulting estimators and demonstrate numerically that this approach scales close to n^{-1} as we increase parallelisation, instead of the usual n^{-1/2} that is typical of standard MCMC algorithms. In practical statistical models we observe up to two orders of magnitude improvement compared with pseudo-random methods.
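The faster convergence the abstract refers to can be seen on a toy integral. Below is a minimal Python sketch, not taken from the paper, comparing a pseudo-random Monte Carlo estimate of ∫₀¹ x² dx = 1/3 with a quasi-Monte Carlo estimate built from the van der Corput low-discrepancy sequence (a simple one-dimensional stand-in for the CUD constructions used in the paper). QMC error typically shrinks close to n^{-1}, versus n^{-1/2} for plain Monte Carlo.

```python
# Sketch: pseudo-random Monte Carlo vs. quasi-Monte Carlo on a toy integral.
# The van der Corput sequence here is illustrative only; the paper's QMCMC
# framework uses completely uniformly distributed (CUD) driving sequences.
import random

def van_der_corput(n, base=2):
    """Return the first n points of the base-`base` van der Corput sequence."""
    points = []
    for i in range(1, n + 1):
        q, denom, x = i, 1, 0.0
        while q:
            denom *= base
            q, r = divmod(q, base)  # peel off digits of i in the given base
            x += r / denom          # mirror them across the radix point
        points.append(x)
    return points

def estimate(points):
    """Sample-mean estimate of the integral of x^2 over [0, 1]."""
    return sum(x * x for x in points) / len(points)

n = 4096
random.seed(0)
mc_err = abs(estimate([random.random() for _ in range(n)]) - 1 / 3)
qmc_err = abs(estimate(van_der_corput(n)) - 1 / 3)
print(f"pseudo-random error: {mc_err:.2e}, quasi-random error: {qmc_err:.2e}")
```

With n = 4096 the quasi-random error lands near 10⁻⁴, consistent with close-to-n^{-1} scaling, while the pseudo-random error is governed by the usual n^{-1/2} rate.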
