
Does the ℓ₁-norm Learn a Sparse Graph under Laplacian Constrained Graphical Models?

26 June 2020
Jiaxi Ying
J. Cardoso
Daniel P. Palomar
arXiv:2006.14925
Abstract

We consider the problem of learning a sparse graph under Laplacian constrained Gaussian graphical models. This problem can be formulated as a penalized maximum likelihood estimation of the precision matrix under Laplacian structural constraints. As in the classical graphical lasso problem, recent works have used ℓ₁-norm regularization to promote sparsity in Laplacian structural precision matrix estimation. However, we find that the widely used ℓ₁-norm is not effective in imposing a sparse solution in this problem. Through empirical evidence, we observe that the number of nonzero graph weights grows as the regularization parameter increases. From a theoretical perspective, we prove that a large regularization parameter surprisingly leads to a fully connected graph. To address this issue, we propose a nonconvex estimation method that solves a sequence of weighted ℓ₁-norm penalized sub-problems, and we prove that the statistical error of the proposed estimator matches the minimax lower bound. To solve each sub-problem, we develop a projected gradient descent algorithm that enjoys a linear convergence rate. Numerical experiments on synthetic and real-world data sets from the recent COVID-19 pandemic and financial stock markets demonstrate the effectiveness of the proposed method. An open-source R package containing the code for all the experiments is available at https://github.com/mirca/sparseGraph.
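To make the abstract's negative result concrete, the penalized maximum likelihood problem it describes can be written as follows. This is a reconstruction from the abstract, not a quotation from the paper: S denotes the sample covariance, gdet the generalized determinant, and λ ≥ 0 the regularization parameter.

\min_{\Theta}\; -\log\operatorname{gdet}(\Theta) + \operatorname{tr}(\Theta S) + \lambda \sum_{i\neq j} |\Theta_{ij}| \quad \text{subject to} \quad \Theta\mathbf{1} = \mathbf{0},\;\; \Theta_{ij} \le 0 \text{ for } i \neq j.

The Laplacian constraints already fix the signs: every off-diagonal entry is nonpositive and every row sums to zero, so \sum_{i\neq j}|\Theta_{ij}| = -\sum_{i\neq j}\Theta_{ij} = \sum_i \Theta_{ii} = \operatorname{tr}(\Theta). Under these constraints the ℓ₁ penalty thus degenerates into the linear term λ tr(Θ), which merges with the data term as tr(Θ(S + λI)): increasing λ merely shifts the diagonal of S rather than shrinking individual edges to zero, which is consistent with the abstract's finding that a large regularization parameter yields a fully connected graph.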

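The abstract's remedy is an iteratively reweighted scheme: fix per-edge penalty weights from the current estimate, solve the resulting weighted ℓ₁-norm penalized sub-problem by projected gradient descent, and repeat. The sketch below is a minimal Python/NumPy rendering of that loop, not the authors' implementation (their code is the R package linked above); the function names, step size eta, reweighting constant eps, and iteration counts are illustrative assumptions. Parametrizing the estimate by nonnegative edge weights makes the Laplacian constraints hold by construction, so the projection step reduces to clipping at zero.

import numpy as np

def laplacian(w, p, pairs):
    # Assemble the p x p Laplacian L = sum_k w_k (e_i - e_j)(e_i - e_j)^T.
    L = np.zeros((p, p))
    for k, (i, j) in enumerate(pairs):
        L[i, j] -= w[k]
        L[j, i] -= w[k]
        L[i, i] += w[k]
        L[j, j] += w[k]
    return L

def fit_sparse_laplacian(S, lam=0.1, eps=1e-4, eta=1e-3, n_outer=10, n_inner=300):
    # Hypothetical driver: a sequence of weighted-l1 sub-problems (outer loop),
    # each solved by projected gradient descent on the edge weights (inner loop).
    p = S.shape[0]
    pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
    J = np.ones((p, p)) / p   # gdet(L) = det(L + J) while the graph stays connected
    w = np.ones(len(pairs))   # start from the fully connected graph
    for _ in range(n_outer):
        c = 1.0 / (w + eps)   # reweighting: small edges are penalized harder
        for _ in range(n_inner):
            M = np.linalg.inv(laplacian(w, p, pairs) + J)
            # Gradient of -log det(L + J) + tr(L S) + lam * <c, w> w.r.t. w_k,
            # using dL/dw_k = (e_i - e_j)(e_i - e_j)^T for edge k = (i, j).
            g = np.array([(S[i, i] + S[j, j] - 2.0 * S[i, j])
                          - (M[i, i] + M[j, j] - 2.0 * M[i, j])
                          for i, j in pairs]) + lam * c
            w = np.maximum(w - eta * g, 0.0)   # projection onto w >= 0
    return laplacian(w, p, pairs)

Note where sparsity comes from in this scheme: it is the reweighting c = 1/(w + eps), not the raw penalty level λ, that drives small edge weights to exactly zero at the clipping step, sidestepping the degeneracy of the plain ℓ₁ penalty discussed above. The paper's guarantees (linear convergence per sub-problem, minimax-optimal statistical error) apply to its algorithm, not to this simplified sketch.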