Learning loopy graphical models with latent variables: Efficient methods and guarantees

17 March 2012
Anima Anandkumar
R. Valluvan
arXiv: 1203.3887
Abstract

The problem of structure estimation in graphical models with latent variables is considered. We characterize conditions for tractable graph estimation and develop efficient methods with provable guarantees. We consider models where the underlying Markov graph is locally tree-like, and the model is in the regime of correlation decay. For the special case of the Ising model, the number of samples $n$ required for structural consistency of our method scales as $n = \Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2}\log p)$, where $p$ is the number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is the depth (i.e., the distance from a hidden node to the nearest observed nodes), and $\eta$ is a parameter which depends on the bounds on the node and edge potentials in the Ising model. Necessary conditions for structural consistency under any algorithm are derived, and our method nearly matches the lower bound on sample requirements. Further, the proposed method is practical to implement and provides flexibility to control the number of latent variables and the cycle lengths in the output graph.
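To make the scaling concrete, below is a minimal Python sketch that evaluates the $\Omega(\cdot)$ expression from the abstract, with constant factors dropped. The parameter values are hypothetical, chosen only to illustrate how the bound behaves; they are not taken from the paper.

```python
import math

def sample_complexity_scale(theta_min: float, delta: int, eta: float, p: int) -> float:
    """Order-of-magnitude scaling n ~ theta_min^(-delta*eta*(eta+1) - 2) * log(p),
    i.e. the Omega(...) expression from the abstract with constants dropped."""
    exponent = -(delta * eta * (eta + 1)) - 2
    return (theta_min ** exponent) * math.log(p)

# Hypothetical settings: weaker edge potentials (smaller theta_min) or deeper
# hidden nodes (larger delta) sharply increase the number of samples needed
# for structural consistency.
for theta_min in (0.5, 0.25):
    for delta in (1, 2):
        scale = sample_complexity_scale(theta_min, delta, eta=1.0, p=1000)
        print(f"theta_min={theta_min}, delta={delta}: n scales like {scale:,.0f}")
```

As the printout shows, halving $\theta_{\min}$ or increasing the depth $\delta$ by one raises the required sample size by orders of magnitude, which is why the correlation-decay regime and bounded depth matter for tractable estimation.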
