Support Consistency of Direct Sparse-Change Learning in Markov Networks

Abstract

We study the problem of learning sparse structure changes between two Markov networks $P$ and $Q$. Rather than fitting two Markov networks separately to two sets of data and figuring out their differences, a recent work proposed to learn the changes \emph{directly} by estimating the ratio between the two Markov network models. In this paper, we give sufficient conditions for \emph{successful change detection} with respect to the sample sizes $n_p, n_q$, the data dimension $m$, and the number of changed edges $d$. When using an unbounded density ratio model, we prove that the true sparse changes can be consistently identified for $n_p = \Omega(d^2 \log \frac{m^2+m}{2})$ and $n_q = \Omega(n_p^2)$, with an exponentially decaying upper bound on the learning error. This sample complexity can be improved to $\min(n_p, n_q) = \Omega(d^2 \log \frac{m^2+m}{2})$ when the boundedness of the density ratio model is assumed. Our theoretical guarantee can be applied to a wide range of discrete/continuous Markov networks.
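For context, the direct approach the abstract refers to exploits the fact that the ratio of two pairwise Markov networks is itself an unnormalized exponential-family model in the parameter differences. A minimal sketch, assuming pairwise models with edge-wise feature maps $f$ (the notation here is illustrative, not taken from the paper):
\[
\frac{p(x;\theta^{(p)})}{q(x;\theta^{(q)})} \;\propto\; \exp\!\Big(\sum_{s \le t} \big(\theta_{st}^{(p)} - \theta_{st}^{(q)}\big)^{\top} f(x_s, x_t)\Big),
\]
so the change parameters $\theta_{st} = \theta_{st}^{(p)} - \theta_{st}^{(q)}$ can be estimated in one step from the two samples, with a group-sparsity penalty over the $\frac{m^2+m}{2}$ candidate edge blocks (the $\binom{m}{2} + m$ pairs with $s \le t$); this count of candidate pairs is what appears inside the logarithm in the sample-complexity bounds above.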
