
Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes

Abstract

We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., finding a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter $p$. We present several complexity-theoretic hardness results that explain the difficulty of approximating MAP inference and the normalizing constant for E-DPPs. We first prove that unconstrained MAP inference for an $n \times n$ matrix is $\textsf{NP}$-hard to approximate within a factor of $2^{\beta n}$, where $\beta = 10^{-10^{13}}$. This result improves upon the best-known inapproximability factor of $(\frac{9}{8} - \epsilon)$ and rules out any polynomial-factor approximation algorithm assuming $\textsf{P} \neq \textsf{NP}$. We then show that log-determinant maximization is $\textsf{NP}$-hard to approximate within a factor of $\frac{5}{4}$ in the unconstrained case and within a factor of $1 + 10^{-10^{13}}$ in the size-constrained monotone case. In particular, log-determinant maximization does not admit a polynomial-time approximation scheme unless $\textsf{P} = \textsf{NP}$. As a corollary of the first result, we demonstrate that the normalizing constant for E-DPPs with any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is $\textsf{NP}$-hard to approximate within a factor of $2^{\beta p n}$, in contrast to the case of $p \leq 1$, which admits a fully polynomial-time randomized approximation scheme.
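To make the MAP inference problem concrete, the following is a minimal sketch of the standard greedy heuristic for unconstrained MAP inference: repeatedly add the item that most increases the determinant of the selected principal submatrix, stopping when no item increases it. The function name and structure are illustrative assumptions, not the paper's method; indeed, the hardness result above implies that no polynomial-time algorithm (greedy included) can guarantee a polynomial approximation factor unless P = NP.

```python
# Illustrative greedy heuristic for unconstrained DPP MAP inference:
# maximize det(L_S) over subsets S of {0, ..., n-1}.
# NOTE: a sketch only; the paper proves this problem is NP-hard to
# approximate within 2^{beta n}, so greedy carries no general guarantee.
import numpy as np

def greedy_map_inference(L: np.ndarray) -> list[int]:
    """Greedily build a subset S trying to maximize det(L[S, S])."""
    n = L.shape[0]
    selected: list[int] = []
    current_det = 1.0  # determinant of the empty principal submatrix
    while True:
        best_gain, best_i = 1.0, None
        for i in range(n):
            if i in selected:
                continue
            S = selected + [i]
            d = float(np.linalg.det(L[np.ix_(S, S)]))
            if d > best_gain * current_det:
                best_gain, best_i = d / current_det, i
        if best_i is None:  # no remaining item increases the determinant
            return selected
        selected.append(best_i)
        current_det *= best_gain
```

For a diagonal kernel, greedy simply keeps the diagonal entries exceeding 1, which is also the exact optimum in that special case.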
