Learning Bayesian Networks Under Sparsity Constraints: A Parameterized Complexity Analysis

30 April 2020
Niels Grüttemeier
Christian Komusiewicz
arXiv:2004.14724 (abs | PDF | HTML)
Abstract

We study the problem of learning the structure of an optimal Bayesian network $D$ when additional constraints are posed on the DAG $D$ or on its moralized graph. More precisely, we consider the constraint that the moralized graph can be transformed to a graph from a sparse graph class $\Pi$ by at most $k$ vertex deletions. We show that for $\Pi$ being the graphs with maximum degree $1$, an optimal network can be computed in polynomial time when $k$ is constant, extending previous work that gave an algorithm with such a running time for $\Pi$ being the class of edgeless graphs [Korhonen & Parviainen, NIPS 2015]. We then show that further extensions or improvements are presumably impossible. For example, we show that when $\Pi$ is the set of graphs with maximum degree $2$ or when $\Pi$ is the set of graphs in which each component has size at most three, then learning an optimal network is NP-hard even if $k=0$. Finally, we show that learning an optimal network with at most $k$ edges in the moralized graph presumably has no $f(k)\cdot |I|^{\mathcal{O}(1)}$-time algorithm and that, in contrast, an optimal network with at most $k$ arcs in the DAG $D$ can be computed in $2^{\mathcal{O}(k)}\cdot |I|^{\mathcal{O}(1)}$ time, where $|I|$ is the total input size.
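The constraint at the heart of the abstract is easy to state operationally: take the moralized graph of a candidate DAG (connect every vertex to its parents and "marry" all parents of a common child), and ask whether deleting at most $k$ vertices leaves a graph in the sparse class $\Pi$. The following minimal Python sketch, which is not from the paper and uses illustrative function names and a hypothetical toy network, checks this by brute force for $\Pi$ being the graphs of maximum degree 1; it only clarifies the problem definition and says nothing about the paper's polynomial-time algorithm.

```python
from itertools import combinations


def moralized_graph(dag):
    """Undirected moralized graph of a DAG given as {vertex: set of parents}:
    connect each vertex to its parents and marry all parents of a common child."""
    edges = set()
    for child, parents in dag.items():
        for p in parents:
            edges.add(frozenset((child, p)))
        for p, q in combinations(parents, 2):
            edges.add(frozenset((p, q)))
    return edges


def max_degree_after_deletion(edges, removed):
    """Maximum degree of the graph obtained by deleting the vertices in `removed`."""
    degree = {}
    for e in edges:
        u, v = tuple(e)
        if u in removed or v in removed:
            continue
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return max(degree.values(), default=0)


def satisfies_constraint(dag, k, max_deg=1):
    """Check whether deleting at most k vertices from the moralized graph of `dag`
    leaves a graph of maximum degree `max_deg`.  Brute force over all vertex
    subsets of size <= k, exponential in k, for illustration only."""
    edges = moralized_graph(dag)
    vertices = set(dag)
    for size in range(k + 1):
        for removed in combinations(vertices, size):
            if max_degree_after_deletion(edges, set(removed)) <= max_deg:
                return True
    return False


if __name__ == "__main__":
    # Hypothetical toy network: d has parents b and c, so b and c are
    # married in the moralized graph, giving edges a-b, b-d, c-d, b-c.
    dag = {"a": set(), "b": {"a"}, "c": set(), "d": {"b", "c"}}
    print(satisfies_constraint(dag, k=0))  # False: vertex b has degree 3
    print(satisfies_constraint(dag, k=1))  # True: deleting b leaves only c-d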
