ResearchTrend.AI

arXiv:1912.02765 (v2, latest)

On the Sample Complexity of Learning Sum-Product Networks

5 December 2019
Ishaq Aden-Ali
H. Ashtiani
Abstract

Sum-Product Networks (SPNs) can be regarded as a form of deep graphical model that compactly represents deeply factored and mixed distributions. An SPN is a rooted directed acyclic graph (DAG) consisting of a set of leaves (corresponding to base distributions), a set of sum nodes (each representing a mixture of its children's distributions), and a set of product nodes (each representing the product of its children's distributions). In this work, we initiate the study of the sample complexity of PAC-learning the set of distributions that correspond to SPNs. We show that the sample complexity of learning tree-structured SPNs with the usual types of leaves (i.e., Gaussian or discrete) grows at most linearly (up to logarithmic factors) with the number of parameters of the SPN. More specifically, we show that the class of distributions corresponding to tree-structured Gaussian SPNs with $k$ mixing weights and $e$ ($d$-dimensional Gaussian) leaves can be learned within total variation error $\epsilon$ using at most $\widetilde{O}\left(\frac{ed^2+k}{\epsilon^2}\right)$ samples. A similar result holds for tree-structured SPNs with discrete leaves. We obtain the upper bounds via the recently proposed notion of distribution compression schemes. More specifically, we show that if a (base) class of distributions $\mathcal{F}$ admits an "efficient" compression scheme, then the class of tree-structured SPNs with leaves from $\mathcal{F}$ also admits an efficient compression scheme.
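To make the SPN structure described in the abstract concrete, the following is a minimal illustrative sketch (not from the paper) of evaluating the density of a tree-structured SPN with Gaussian leaves: leaves return a base density, sum nodes mix their children with nonnegative weights summing to one, and product nodes multiply their children's densities over disjoint variable scopes. The dictionary-based node representation is an assumption made here purely for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def evaluate(node, assignment):
    """Evaluate the SPN density at a full assignment (dict: variable -> value)."""
    kind = node["type"]
    if kind == "leaf":
        # Leaf: a univariate Gaussian base distribution over one variable.
        return gaussian_pdf(assignment[node["var"]], node["mu"], node["sigma"])
    if kind == "sum":
        # Sum node: a mixture of its children's distributions.
        return sum(w * evaluate(child, assignment)
                   for w, child in zip(node["weights"], node["children"]))
    if kind == "product":
        # Product node: product of its children's densities (disjoint scopes).
        density = 1.0
        for child in node["children"]:
            density *= evaluate(child, assignment)
        return density
    raise ValueError(f"unknown node type: {kind}")

# Example tree: a 2-component mixture (k = 2 mixing weights) of products
# of univariate Gaussian leaves over variables x and y (e = 4 leaves, d = 1).
spn = {
    "type": "sum",
    "weights": [0.3, 0.7],
    "children": [
        {"type": "product", "children": [
            {"type": "leaf", "var": "x", "mu": 0.0, "sigma": 1.0},
            {"type": "leaf", "var": "y", "mu": 0.0, "sigma": 1.0},
        ]},
        {"type": "product", "children": [
            {"type": "leaf", "var": "x", "mu": 2.0, "sigma": 1.0},
            {"type": "leaf", "var": "y", "mu": 2.0, "sigma": 1.0},
        ]},
    ],
}

density = evaluate(spn, {"x": 0.0, "y": 0.0})
```

The paper's sample complexity bound counts exactly these parameters: the mixing weights at the sum nodes and the parameters of the Gaussian leaves.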
