ResearchTrend.AI


arXiv:2007.04803

Online Approximate Bayesian learning

9 July 2020
Mathieu Gerber
Randal Douc
Abstract

We introduce in this work a new approach for online approximate Bayesian learning. The main idea of the proposed method is to approximate the sequence $(\pi_t)_{t\geq 1}$ of posterior distributions by a sequence $(\tilde{\pi}_t)_{t\geq 1}$ which (i) can be estimated in an online fashion using sequential Monte Carlo methods and (ii) is shown to converge to the same distribution as the sequence $(\pi_t)_{t\geq 1}$, under weak assumptions on the statistical model at hand. In its simplest version, $(\tilde{\pi}_t)_{t\geq 1}$ is the sequence of filtering distributions associated with a particular state-space model, which can therefore be approximated using a standard particle filter algorithm. We illustrate on several challenging examples the benefits of this approach for approximate Bayesian parameter inference, and with one real-data example we show that its online predictive performance can significantly outperform that of stochastic gradient descent and streaming variational Bayes.
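To give a flavor of the general idea, the following is a minimal sketch (not the paper's exact construction) of how a static parameter can be tracked online with a standard particle filter: the parameter is treated as the state of a state-space model with assumed artificial random-walk dynamics, each new observation reweights the particles by its likelihood, and resampling keeps the approximation stable. The Gaussian model $y_t \sim N(\theta, 1)$, the prior, and the `jitter` scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_bayes_particle_filter(ys, n_particles=2000, jitter=0.05):
    """Generic SMC sketch: approximate the posterior over the mean
    `theta` of a Gaussian model y_t ~ N(theta, 1), one observation
    at a time.  `jitter` is an assumed artificial-dynamics scale,
    not a quantity prescribed by the paper."""
    # illustrative prior: theta ~ N(0, 10)
    particles = rng.normal(0.0, np.sqrt(10.0), n_particles)
    for y in ys:
        # artificial dynamics: small random-walk move of each particle
        particles = particles + rng.normal(0.0, jitter, n_particles)
        # reweight by the likelihood of the new observation
        logw = -0.5 * (y - particles) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # multinomial resampling back to equal weights
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return particles

# synthetic stream with true mean 2.0
ys = rng.normal(2.0, 1.0, 500)
post = online_bayes_particle_filter(ys)
print(post.mean())  # posterior mean should be close to 2.0
```

Each observation is processed once and then discarded, which is what makes the scheme online; the cost per step is O(number of particles), independent of how much data has been seen so far.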
