
arXiv:2109.10642
Decentralized Learning of Tree-Structured Gaussian Graphical Models from Noisy Data

22 September 2021
Akram Hussain
Abstract

This paper studies the decentralized learning of tree-structured Gaussian graphical models (GGMs) from noisy data. In decentralized learning, the data set is distributed across different machines (sensors); GGMs are widely used to model complex networks such as gene regulatory networks and social networks. The proposed decentralized learning scheme uses the Chow-Liu algorithm to estimate the tree-structured GGM. Previous works mostly derived upper bounds on the probability of incorrect tree-structure recovery without modeling any practical noise, for simplicity. This paper instead investigates the effects of three common types of noisy channels: the Gaussian, erasure, and binary symmetric channels. For the Gaussian channel, to satisfy a failure-probability upper bound $\delta > 0$ in recovering a $d$-node tree structure, the proposed theorem requires only $\mathcal{O}(\log(d/\delta))$ samples for the smallest sample size $n$, compared to $\mathcal{O}(\log^4(d/\delta))$ samples in the previous literature \cite{Nikolakakis}, by using the positive correlation coefficient assumption adopted in several important works in the literature. Moreover, the approximately bounded Gaussian random variable assumption does not appear in \cite{Nikolakakis}. Given some knowledge about the tree structure, the proposed algorithmic bound achieves clearly better performance than the formulaic bounds at small sample sizes (e.g., $n < 2000$). Finally, we validate the theoretical results by performing simulations on synthetic data sets.
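The Chow-Liu step referenced in the abstract reduces, for Gaussian data, to a maximum-weight spanning tree over pairwise mutual informations, which for jointly Gaussian variables are determined by the correlation coefficients via $I(X_i; X_j) = -\tfrac{1}{2}\log(1-\rho_{ij}^2)$. The sketch below illustrates that idea under stated assumptions; the function name and the toy chain-structured example are illustrative and not taken from the paper:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def chow_liu_tree(samples):
    """Estimate a tree-structured GGM skeleton via the Chow-Liu algorithm.

    samples: (n, d) array of n observations of a d-dimensional Gaussian.
    Returns the sorted edge list (i, j) of the maximum-weight spanning
    tree, with edge weights I(X_i; X_j) = -0.5 * log(1 - rho_ij^2).
    """
    corr = np.corrcoef(samples, rowvar=False)
    rho2 = np.clip(corr ** 2, 0.0, 1.0 - 1e-12)  # guard log(0)
    mi = -0.5 * np.log1p(-rho2)                  # pairwise Gaussian MI
    np.fill_diagonal(mi, 0.0)
    # Negating the weights turns scipy's minimum spanning tree
    # into the maximum-weight spanning tree that Chow-Liu requires.
    mst = minimum_spanning_tree(-mi).tocoo()
    return sorted((int(min(i, j)), int(max(i, j)))
                  for i, j in zip(mst.row, mst.col))


# Toy example: a 3-node Markov chain X0 -> X1 -> X2.
rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = 0.9 * x0 + 0.3 * rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)
edges = chow_liu_tree(np.column_stack([x0, x1, x2]))
print(edges)  # recovers the chain edges (0, 1) and (1, 2)
```

In the noisy-channel setting studied in the paper, the same spanning-tree step is applied to correlations estimated from channel-corrupted observations, which is what drives the sample-size requirements in the stated bounds.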
