Efficient Learning of Practical Markov Random Fields with Exact
Inference
We introduce a new parameter learning algorithm for a large class of Markov Random Fields (MRFs) with exact and efficient inference. Let the 1-neighbourhood of a parameterized clique be the union of the variables in the clique and in all of its neighbouring cliques. Then, the complexity of the inference step in the new learning algorithm, which we call LAP, is linear in the size of the MRF and exponential in the size of the 1-neighbourhood. In contrast, when using the junction tree algorithm for inference, the complexity is exponential in the tree-width of the MRF. Consequently, for a J-by-J square-lattice MRF, the complexity of an exact inference step with the junction tree algorithm is exponential in J, but it is only linear in J when using LAP. We prove that for individually parameterized cliques, the LAP and maximum likelihood estimates coincide. The LAP algorithm is natively parallel and hence ideal for massive-scale data modeling. The algorithm applies to many practical MRFs of great interest, including 2D and 3D lattices, Chimera models, and skip-chain conditional random fields.
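The constant-size 1-neighbourhood claim can be checked concretely. The sketch below (an illustration, not the authors' code) builds the pairwise cliques of a J-by-J 4-connected lattice, computes each clique's 1-neighbourhood as defined above, and shows that the largest 1-neighbourhood has a fixed size (8 variables for an interior edge) no matter how large J grows, so the per-clique inference cost in LAP stays constant while the number of cliques grows only with the lattice size. The function names `lattice_cliques` and `one_neighbourhood` are assumptions for this sketch.

```python
from itertools import product

def lattice_cliques(J):
    """Pairwise cliques (edges) of a J-by-J 4-connected square lattice."""
    edges = []
    for i, j in product(range(J), repeat=2):
        if i + 1 < J:
            edges.append(frozenset({(i, j), (i + 1, j)}))  # vertical edge
        if j + 1 < J:
            edges.append(frozenset({(i, j), (i, j + 1)}))  # horizontal edge
    return edges

def one_neighbourhood(clique, cliques):
    """Union of the clique's variables with those of every clique sharing a variable."""
    nb = set(clique)
    for other in cliques:
        if clique & other:
            nb |= other
    return nb

# The maximum 1-neighbourhood size is the same for J = 5 and J = 10:
# it does not grow with the lattice, unlike the tree-width (which is J).
for J in (5, 10):
    cliques = lattice_cliques(J)
    max_nb = max(len(one_neighbourhood(c, cliques)) for c in cliques)
    print(J, max_nb)  # → 5 8, then 10 8
```

An interior edge touches two variables, each with three further lattice neighbours, giving the 2 + 3 + 3 = 8 bound; boundary edges have smaller 1-neighbourhoods.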