
Support Consistency of Direct Sparse-Change Learning in Markov Networks

Abstract

We study the problem of learning sparse structure changes between two Markov networks $P$ and $Q$. Rather than fitting two Markov networks separately to two sets of data and comparing their differences, recent work proposed to learn the changes \emph{directly} by estimating the ratio between the two Markov network models. This direct approach was shown to perform well in experiments, but its theoretical properties remained unexplored. In this paper, we give sufficient conditions for \emph{successful change detection} with respect to the sample sizes $n_p, n_q$, the data dimension $m$, and the number of changed edges $d$. More specifically, we prove that the true sparse changes can be consistently identified for $n_p = \Omega(d^2 \log \frac{m^2}{2})$ and $n_q = \Omega(\frac{n_p^2}{d})$, with an exponentially decaying upper bound on the learning error. Our theoretical guarantee applies to a wide range of discrete/continuous Markov networks.
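For readers unfamiliar with the direct approach the abstract refers to: the ratio $p(x)/q(x)$ between two pairwise Markov networks is itself an exponential-family model whose natural parameter is the \emph{difference} of the two networks' parameters, so fitting that ratio with a sparsity penalty recovers the changed edges without estimating either network on its own. Below is a minimal NumPy sketch of this idea using a KLIEP-style objective with a plain L1/ISTA solver; the function names and solver choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pairwise_features(X):
    """Pairwise sufficient statistics x_i * x_j for i <= j of a pairwise
    Markov network (m(m+1)/2 features per sample). Illustrative helper."""
    i, j = np.triu_indices(X.shape[1])
    return X[:, i] * X[:, j]

def kliep_sparse_change(Xp, Xq, lam=0.05, step=1e-2, iters=3000):
    """Sketch of L1-penalized density-ratio estimation of the parameter
    *difference* theta between P and Q, fitted directly from samples.

    Minimizes
        -mean_p[f(x)^T theta] + log mean_q[exp(f(x)^T theta)] + lam*||theta||_1
    by proximal gradient (ISTA); nonzero entries of theta flag changed edges.
    """
    Fp, Fq = pairwise_features(Xp), pairwise_features(Xq)
    theta = np.zeros(Fp.shape[1])
    mean_fp = Fp.mean(axis=0)
    for _ in range(iters):
        z = Fq @ theta
        w = np.exp(z - z.max())      # stabilized weights over the Q-samples
        w /= w.sum()
        grad = -mean_fp + w @ Fq     # gradient of the smooth log-partition part
        theta -= step * grad
        # soft-thresholding: proximal step for the L1 penalty
        theta = np.sign(theta) * np.maximum(np.abs(theta) - step * lam, 0.0)
    return theta
```

On, say, Gaussian data drawn from two precision matrices that differ in a few entries, the nonzero coordinates of the returned theta should concentrate on the changed edges; lam trades off the sparsity of the detected change set against sensitivity.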
