We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices, as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task unless the input matrices are forced to have favorable structures. In particular, we prove the following: (1) Computing exactly for every (fixed) positive even integer is UP-hard and ModP-hard, which gives a negative answer to an open question posed by Kulesza and Taskar. (2) is NP-hard to approximate within a factor of or for any , where is the input size and is the order of the input matrix. This result is stronger than the #P-hardness for the case of two matrices derived by Gillenwater. (3) There exists a -time algorithm for computing , where is the maximum rank of and or the treewidth of the graph formed by nonzero entries of and . Such parameterized algorithms are said to be fixed-parameter tractable. These results can be extended to the fixed-size case. Further, we present two applications of fixed-parameter tractable algorithms given a matrix of treewidth : (4) We can compute a -approximation to for any fractional number in time. (5) We can find a -approximation to unconstrained MAP inference in time.
View on arXiv