

Information-theoretic lower bounds on learning the structure of Bayesian networks

27 January 2016
Asish Ghoshal
Jean Honorio
arXiv:1601.07460 (abs | PDF | HTML)
Abstract

In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks from data. We show that for Bayesian networks on continuous as well as discrete random variables, there exists a parameterization of the Bayesian network such that the minimum number of samples required to learn the "true" Bayesian network grows as $\mathcal{O}(m)$, where $m$ is the number of variables in the network. Further, for sparse Bayesian networks, where the number of parents of any variable in the network is restricted to be at most $l$ for $l \ll m$, the minimum number of samples required grows as $\mathcal{O}(l \log m)$. We discuss conditions under which these limits are achieved. For Bayesian networks over continuous variables, we obtain results for Gaussian regression and Gumbel Bayesian networks; for discrete variables, we obtain results for Noisy-OR, Conditional Probability Table (CPT) based Bayesian networks, and logistic regression networks. Finally, as a byproduct, we also obtain lower bounds on the sample complexity of feature selection in logistic regression and show that the bounds are sharp.
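The $\mathcal{O}(l \log m)$ rate for sparse networks is the scaling one would expect from a standard Fano-style counting argument. As a hedged sketch of that argument (not reproduced from the paper; the restricted ensemble and the per-sample information bound $C$ below are illustrative assumptions): restricting attention to the possible parent sets of a single node already gives a hypothesis class of size

$$
M \;\ge\; \binom{m-1}{l}, \qquad \log M \;\ge\; l \log\frac{m-1}{l},
$$

and Fano's inequality for $n$ i.i.d. samples, with per-sample mutual information between the data and the structure bounded by $C$, requires for any estimator with error probability at most $\delta$ that

$$
n \;\ge\; \frac{(1-\delta)\log M - \log 2}{C} \;=\; \Omega\!\left(\frac{l \log m}{C}\right) \quad \text{when } l \ll m,
$$

which matches the $l \log m$ scaling stated in the abstract up to the distribution-dependent constant $C$.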
