Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes

We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., finding a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs via an exponent parameter p. We present several complexity-theoretic hardness results that explain the difficulty of approximating MAP inference and the normalizing constant for E-DPPs. We first prove that unconstrained MAP inference for an n×n matrix is NP-hard to approximate within a factor of 2^{βn}, where β = 10^{-10^{13}}. This result improves upon the best-known inapproximability factor of 9/8, and rules out the existence of any polynomial-factor approximation algorithm assuming P ≠ NP. We then show that log-determinant maximization is NP-hard to approximate within a factor of 5/4 for the unconstrained case and within a factor of 1 + 10^{-10^{13}} for the size-constrained monotone case. In particular, log-determinant maximization does not admit a polynomial-time approximation scheme unless P = NP. As a corollary of the first result, we demonstrate that the normalizing constant for E-DPPs of any (fixed) constant exponent p is NP-hard to approximate within a factor of 2^{βpn}, which is in contrast to the case of p ≤ 1, which admits a fully polynomial-time randomized approximation scheme.
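The two objects studied above can be made concrete with a brute-force sketch. This is an exponential-time illustration of the problem definitions only (the paper proves these quantities are hard to *approximate* in polynomial time); the function names are mine, not the paper's:

```python
import itertools
import numpy as np

def map_inference_bruteforce(L):
    """Unconstrained MAP inference for a DPP with kernel L:
    return the index subset S maximizing det(L_S), by exhaustive search.
    (det of the empty principal submatrix is taken to be 1.)"""
    n = L.shape[0]
    best_val, best_S = 1.0, ()
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            val = np.linalg.det(L[np.ix_(S, S)])
            if val > best_val:
                best_val, best_S = val, S
    return best_S, best_val

def edpp_normalizing_constant(L, p):
    """Normalizing constant of an E-DPP with exponent p:
    Z_p = sum over all subsets S of det(L_S)^p."""
    n = L.shape[0]
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            total += np.linalg.det(L[np.ix_(S, S)]) ** p
    return total
```

For p = 1 this recovers the ordinary DPP normalizing constant, which is computable in closed form as det(L + I); it is the hardness for other exponents p that the corollary addresses.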