ResearchTrend.AI

arXiv: 1502.07039v2 (latest)

Markov Interacting Importance Samplers

25 February 2015
Eduardo F. Mendes
Marcel Scharth
Robert Kohn
Abstract

We introduce a new Markov chain Monte Carlo (MCMC) sampler that iterates by constructing conditional importance sampling (IS) approximations to target distributions. We present Markov interacting importance samplers (MIIS) in general form, followed by examples to demonstrate their flexibility. A leading application is when the exact Gibbs sampler is not available due to infeasibility of direct simulation from the conditional distributions. The MIIS algorithm uses conditional IS approximations to jointly sample the current state of the Markov chain and estimate conditional expectations (possibly by incorporating a full range of variance reduction techniques). We compute Rao-Blackwellized estimates based on the conditional expectations to construct control variates for estimating expectations under the target distribution. The control variates are particularly efficient when there are substantial correlations in the target distribution, a challenging setting for MCMC. We also introduce the MIIS random walk algorithm, designed to accelerate convergence and improve upon the computational efficiency of standard random walk samplers. Simulated and empirical illustrations for Bayesian analysis of the mixed logit model and Markov modulated Poisson processes show that the method significantly reduces the variance of Monte Carlo estimates compared to standard MCMC approaches, at equivalent implementation and computational effort.
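To illustrate the core idea described in the abstract — replacing exact simulation from a full conditional with an importance-sampling approximation, and reusing the weighted particles for a Rao-Blackwellized estimate — here is a minimal sketch on a toy bivariate Gaussian target. This is an assumption-laden simplification, not the paper's full MIIS construction (the target, proposal, and particle counts are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: bivariate normal with strong correlation rho, a setting
# the abstract flags as challenging for standard MCMC.
rho = 0.9

def log_cond(x, other):
    # Log density (up to a constant) of x | other under the target:
    # x | other ~ N(rho * other, 1 - rho**2).
    return -0.5 * (x - rho * other) ** 2 / (1.0 - rho ** 2)

def is_step(other, n_particles=50, prop_scale=2.0):
    """One conditional importance-sampling step (sketch of the MIIS idea):
    draw particles from a broad proposal, weight them against the exact
    conditional, resample one particle as the new chain state, and return
    a Rao-Blackwellized estimate of E[x | other] from the weighted cloud."""
    particles = rng.normal(rho * other, prop_scale, size=n_particles)
    # Self-normalized importance weights: target / proposal (log scale).
    log_w = log_cond(particles, other) - (
        -0.5 * (particles - rho * other) ** 2 / prop_scale ** 2
    )
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    new_state = rng.choice(particles, p=w)   # sampled chain update
    rb_estimate = np.sum(w * particles)      # Rao-Blackwellized E[x | other]
    return new_state, rb_estimate

# Gibbs-style sweep in which each full conditional is handled by IS.
x1, x2 = 0.0, 0.0
rb_means = []
for _ in range(5000):
    x1, rb1 = is_step(x2)
    x2, rb2 = is_step(x1)
    rb_means.append(rb1)

print(np.mean(rb_means))  # should be near 0, the true marginal mean
```

The Rao-Blackwellized estimates (`rb_means`) average the conditional expectation over the weighted particles rather than the single resampled draw, which is the mechanism the abstract credits with the variance reduction; the paper additionally builds control variates from these quantities, which this sketch omits.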

View on arXiv