
Bayesian Predictive Coding

31 March 2025
Alexander Tschantz
Magnus T. Koudahl
Hampus Linander
Lancelot Da Costa
Conor Heins
Jeff Beck
Christopher L. Buckley
Abstract

Predictive coding (PC) is an influential theory of information processing in the brain, providing a biologically plausible alternative to backpropagation. It is motivated in terms of Bayesian inference, as hidden states and parameters are optimised via gradient descent on variational free energy. However, implementations of PC rely on maximum a posteriori (MAP) estimates of hidden states and maximum likelihood (ML) estimates of parameters, limiting their ability to quantify epistemic uncertainty. In this work, we investigate a Bayesian extension to PC that estimates a posterior distribution over network parameters. This approach, termed Bayesian Predictive Coding (BPC), preserves the locality of PC and results in closed-form Hebbian weight updates. Compared to PC, our BPC algorithm converges in fewer epochs in the full-batch setting and remains competitive in the mini-batch setting. Additionally, we demonstrate that BPC offers uncertainty quantification comparable to existing methods in Bayesian deep learning, while also improving convergence properties. Together, these results suggest that BPC provides a biologically plausible method for Bayesian learning in the brain, as well as an attractive approach to uncertainty quantification in deep learning.
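The abstract's key claim — that maintaining a posterior over parameters can still yield local, closed-form weight updates — can be illustrated with a standard conjugate Gaussian update for a single linear unit. This is a minimal sketch of the general idea, not the paper's BPC algorithm; all names (`bayesian_weight_update`, `sigma2`, etc.) and the single-layer setup are illustrative assumptions.

```python
import numpy as np

def bayesian_weight_update(prior_mean, prior_prec, x, y, sigma2=1.0):
    """Closed-form Gaussian posterior over a weight vector w for y ~ N(w @ x, sigma2).

    prior_mean: (d,) prior mean of w
    prior_prec: (d, d) prior precision of w
    x: (d,) presynaptic activity; y: scalar postsynaptic observation
    sigma2: assumed (known) observation noise variance
    """
    # The precision update depends only on the outer product of
    # presynaptic activity -- a local, Hebbian-like quantity.
    post_prec = prior_prec + np.outer(x, x) / sigma2
    # The mean combines prior information with the new observation;
    # no gradient descent or backpropagated error is needed.
    post_mean = np.linalg.solve(
        post_prec, prior_prec @ prior_mean + x * y / sigma2
    )
    return post_mean, post_prec

# Toy demonstration: the posterior mean recovers the true weights,
# while the growing precision tracks decreasing epistemic uncertainty.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
mean, prec = np.zeros(3), np.eye(3)
for _ in range(200):
    x = rng.normal(size=3)
    y = w_true @ x + 0.1 * rng.normal()
    mean, prec = bayesian_weight_update(mean, prec, x, y, sigma2=0.01)
print(np.round(mean, 2))  # posterior mean approaches w_true
```

In this conjugate setting each update is exact and closed-form, in contrast to the iterative gradient descent on variational free energy used by standard PC; the paper extends this style of reasoning to full predictive coding networks.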

@article{tschantz2025_2503.24016,
  title={Bayesian Predictive Coding},
  author={Alexander Tschantz and Magnus Koudahl and Hampus Linander and Lancelot Da Costa and Conor Heins and Jeff Beck and Christopher Buckley},
  journal={arXiv preprint arXiv:2503.24016},
  year={2025}
}