Estimation of Markov Chain via Rank-constrained Likelihood

Abstract
This paper studies the recovery and state compression of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the estimation risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex functions) programming algorithm to tackle the rank-constrained non-smooth optimization problem, and establish convergence results. Experiments with taxi trip data show that the estimator is able to identify the zoning of Manhattan.
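For concreteness, the rank-constrained likelihood maximization can be sketched as follows; the transition-count notation $n_{ij}$ and the Ky Fan norm surrogate below are standard devices in this literature and are stated as an assumed illustration rather than the paper's exact formulation.
\[
\widehat{P} \in \arg\max_{P \in \Delta_{p}} \sum_{i,j} n_{ij} \log P_{ij}
\quad \text{subject to} \quad \operatorname{rank}(P) \le r,
\]
where $\Delta_{p}$ is the set of $p \times p$ row-stochastic matrices and $n_{ij}$ counts the observed transitions from state $i$ to state $j$ along the trajectory. Because $\operatorname{rank}(P) \le r$ holds if and only if $\|P\|_{*} - \|P\|_{(r)} = 0$, where $\|P\|_{(r)}$ is the Ky Fan $r$-norm (the sum of the $r$ largest singular values), the constraint admits a difference-of-convex penalty, e.g.
\[
\max_{P \in \Delta_{p}} \; \sum_{i,j} n_{ij} \log P_{ij} \;-\; \lambda \bigl( \|P\|_{*} - \|P\|_{(r)} \bigr), \qquad \lambda > 0,
\]
whose concave part $-\|P\|_{(r)}$ can be linearized at each iterate, which is the structure a DC programming algorithm exploits.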