On Learning Sparse Changes of Markov Networks

We study the problem of learning sparse structure changes between two Markov networks $P$ and $Q$. Rather than fitting two Markov networks separately to two sets of data and figuring out their differences, a recent work proposed to learn changes \emph{directly} via estimating the ratio between the two Markov network models. Such a direct approach was demonstrated to perform well in experiments, although its theoretical properties remained unexplored. In this paper, we give sufficient conditions for \emph{successful change detection} with respect to the sample sizes $n_p$ and $n_q$, the data dimension $m$, and the number of changed edges $d$. More specifically, when using an unbounded density ratio model, we prove that the true sparse changes can be consistently identified for $n_p = \Omega(d^2 \log \frac{m^2+m}{2})$ and $n_q = \Omega(n_p^2)$, with an exponentially decaying upper bound on the learning error. This sample complexity can be improved to $\min(n_p, n_q) = \Omega(d^2 \log \frac{m^2+m}{2})$ when the boundedness of the density ratio model is assumed. Our theoretical guarantee can be applied to a wide range of discrete/continuous Markov networks.
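To make the direct approach concrete, the sketch below estimates the density ratio $p(x)/q(x)$ between two Gaussian Markov networks with a sparsity-penalized log-linear model and reads changed edges off the nonzero coefficients. This is a minimal sketch under our own assumptions, not the paper's implementation: we use a plain $\ell_1$ penalty (which the group-sparsity penalty reduces to when each edge carries a single parameter, as in the Gaussian case), a simple ISTA optimizer, and illustrative names (`pairwise_features`, `fit_sparse_change`, `lam`, `lr`).

```python
import numpy as np

def pairwise_features(X):
    # Map each sample x to its pairwise products x_i * x_j (i <= j),
    # the sufficient statistics of a pairwise Markov network.
    iu = np.triu_indices(X.shape[1])
    return X[:, iu[0]] * X[:, iu[1]]          # shape (n, m*(m+1)/2)

def fit_sparse_change(Xp, Xq, lam=0.05, lr=1e-2, n_iter=3000):
    """ISTA on an L1-penalized KLIEP-style objective:
        min_theta  -mean_p[theta^T f(x)] + log mean_q[exp(theta^T f(x))] + lam*||theta||_1
    Nonzero entries of theta flag edges whose weights differ between P and Q."""
    Fp, Fq = pairwise_features(Xp), pairwise_features(Xq)
    theta = np.zeros(Fp.shape[1])
    for _ in range(n_iter):
        # Gradient of the smooth part: -mean_p[f] + softmax-weighted mean of f over Q.
        a = Fq @ theta
        w = np.exp(a - a.max()); w /= w.sum()
        theta -= lr * (-Fp.mean(axis=0) + Fq.T @ w)
        # Proximal step for the L1 penalty (soft-thresholding).
        theta = np.sign(theta) * np.maximum(np.abs(theta) - lr * lam, 0.0)
    return theta

# Example: two 8-dimensional Gaussian networks differing in a single edge.
rng = np.random.default_rng(0)
m = 8
Prec_p, Prec_q = np.eye(m), np.eye(m)
Prec_q[0, 1] = Prec_q[1, 0] = 0.4            # the one changed edge
Xp = rng.multivariate_normal(np.zeros(m), np.linalg.inv(Prec_p), size=2000)
Xq = rng.multivariate_normal(np.zeros(m), np.linalg.inv(Prec_q), size=2000)
theta = fit_sparse_change(Xp, Xq)
iu = np.triu_indices(m)
print([(i, j) for i, j, t in zip(iu[0], iu[1], theta) if abs(t) > 1e-3])
```

Note the design point the abstract emphasizes: neither precision matrix is estimated on its own; the ratio $p(x)/q(x) \propto \exp(-\frac{1}{2} x^\top (\Lambda_p - \Lambda_q) x)$ depends only on the \emph{difference} of the parameters, so only the changed edges need to be sparse, not the networks themselves.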