High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence

Given i.i.d. observations of a random vector $X \in \mathbb{R}^p$, we study the problem of estimating both its covariance matrix $\Sigma^*$, and its inverse covariance or concentration matrix $\Theta^* = (\Sigma^*)^{-1}$. We estimate $\Theta^*$ by minimizing an $\ell_1$-penalized log-determinant Bregman divergence; in the multivariate Gaussian case, this approach corresponds to $\ell_1$-penalized maximum likelihood, and the structure of $\Theta^*$ is specified by the graph of an associated Gaussian Markov random field. We analyze the performance of this estimator under high-dimensional scaling, in which the number of nodes in the graph $p$, the number of edges $s$, and the maximum node degree $d$ are allowed to grow as a function of the sample size $n$. In addition to the parameters $(p, s, d)$, our analysis identifies other key quantities that control rates: (a) the $\ell_\infty$-operator norm of the true covariance matrix $\Sigma^*$; (b) the $\ell_\infty$-operator norm of the sub-matrix $\Gamma^*_{SS}$, where $S$ indexes the graph edges, and $\Gamma^* = (\Theta^*)^{-1} \otimes (\Theta^*)^{-1}$; (c) a mutual incoherence or irrepresentability measure on the matrix $\Gamma^*$; and (d) the rate of decay $1/f(n,\delta)$ on the probabilities $\{|\widehat{\Sigma}^n_{ij} - \Sigma^*_{ij}| > \delta\}$, where $\widehat{\Sigma}^n$ is the sample covariance based on the $n$ samples. Our first result establishes consistency of our estimate $\widehat{\Theta}$ in the elementwise maximum norm. This in turn allows us to derive convergence rates in Frobenius and spectral norms, with improvements upon existing results for graphs with maximum node degrees $d = o(\sqrt{s})$. In our second result, we show that with probability converging to one, the estimate $\widehat{\Theta}$ correctly specifies the zero pattern of the concentration matrix $\Theta^*$.
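In the multivariate Gaussian case, the estimator described above coincides with the $\ell_1$-penalized Gaussian MLE, commonly known as the graphical lasso. The following is a minimal sketch of that objective using scikit-learn's `GraphicalLasso`, not code from the paper; the chain-graph choice of $\Theta^*$, the sample size, and the penalty level `alpha` are all arbitrary illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical sparse ground-truth precision matrix Theta* (p = 5 nodes):
# a chain graph, so Theta* is tridiagonal and its zero pattern encodes
# the missing edges of the associated Gaussian Markov random field.
p = 5
theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
sigma_true = np.linalg.inv(theta_true)

# Draw n i.i.d. Gaussian samples with covariance Sigma*.
rng = np.random.default_rng(0)
n = 2000
X = rng.multivariate_normal(np.zeros(p), sigma_true, size=n)

# l1-penalized Gaussian maximum likelihood: minimize, over Theta > 0,
#   tr(Sigma_hat Theta) - log det(Theta) + alpha * ||Theta||_{1,off-diag},
# i.e. the l1-penalized log-determinant objective in the Gaussian case.
# alpha = 0.05 is an arbitrary illustrative penalty level.
model = GraphicalLasso(alpha=0.05).fit(X)
theta_hat = model.precision_

# Compare the recovered zero pattern against the true chain graph.
print("estimated zero pattern:\n", (np.abs(theta_hat) > 1e-3).astype(int))
print("true zero pattern:\n", (np.abs(theta_true) > 0).astype(int))
```

With enough samples relative to $(p, s, d)$, the thresholded support of `theta_hat` matches the true edge set, which is the model-selection guarantee the second result formalizes.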