ResearchTrend.AI

arXiv:1401.4988 (v2, latest)

Marginal Pseudo-Likelihood Inference for Markov Networks

20 January 2014
J. Pensar
Henrik J. Nyman
Juha Niiranen
arXiv: abs | PDF | HTML
Abstract

Since its introduction in the 1970s, pseudo-likelihood has become a well-established inference tool for random network models. More recently, there has been renewed interest in pseudo-likelihood-based approaches, motivated by several 'large p, small n' type applications. Under such circumstances some form of regularization is needed to obtain plausible inferences. Currently available methods typically require a tuning parameter to adapt the level of regularization to a particular dataset, optimized for example by cross-validation. Here we introduce a Bayesian version of pseudo-likelihood inference for Markov networks, which enables automatic regularization through marginalization over the nuisance parameters in the model. We prove consistency of the resulting estimator for network structure and introduce an efficient algorithm for learning structures via harmonization of candidate sets of Markov blankets. The marginal pseudo-likelihood method is shown to perform favorably against recent popular inference methods for Markov networks in terms of accuracy, while being at a comparable level in terms of computational complexity.
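To make the objective concrete, below is a minimal sketch of the classical (non-Bayesian) pseudo-log-likelihood for a binary pairwise Markov network (an Ising model), where each node's conditional distribution depends only on its Markov blanket. This is an illustration of the general pseudo-likelihood idea the abstract builds on, not the paper's marginal (Bayesian) version; all function and variable names are illustrative.

```python
import math

def pseudo_log_likelihood(J, h, samples):
    """Pseudo-log-likelihood of +/-1 samples under a pairwise binary
    Markov network with couplings J (symmetric, zero diagonal) and
    local fields h.

    The pseudo-likelihood replaces the intractable joint likelihood
    with the product over nodes of P(x_i | x_rest), each of which is
    a cheap logistic term in the local field.
    """
    n = len(h)
    total = 0.0
    for x in samples:
        for i in range(n):
            # Local field felt by node i from its neighbors.
            field = h[i] + sum(J[i][j] * x[j] for j in range(n) if j != i)
            # log P(x_i | rest) = x_i * field - log(2 cosh(field))
            total += x[i] * field - math.log(2.0 * math.cosh(field))
    return total

# Toy usage: two coupled nodes, three observations.
J = [[0.0, 0.5], [0.5, 0.0]]
h = [0.1, -0.2]
data = [(1, 1), (1, -1), (-1, -1)]
print(pseudo_log_likelihood(J, h, data))
```

In a regularized setting one would maximize this objective plus a penalty whose strength is tuned (e.g. by cross-validation); the paper's contribution is to replace that tuning with marginalization over the parameters.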
