
A quasi-polynomial time algorithm for Multi-Dimensional Scaling via LP hierarchies

Annual Conference on Computational Learning Theory (COLT), 2023
Main: 33 Pages
4 Figures
Bibliography: 5 Pages
Abstract

Multi-dimensional Scaling (MDS) is a family of methods for embedding an $n$-point metric into low-dimensional Euclidean space. We study the Kamada-Kawai formulation of MDS: given a set of non-negative dissimilarities $\{d_{i,j}\}_{i,j \in [n]}$ over $n$ points, the goal is to find an embedding $\{x_1,\dots,x_n\} \subset \mathbb{R}^k$ that minimizes \[\text{OPT} = \min_{x} \mathbb{E}_{i,j \in [n]} \left[ \left(1-\frac{\|x_i - x_j\|}{d_{i,j}}\right)^2 \right].\] Kamada-Kawai provides a more relaxed measure of the quality of a low-dimensional metric embedding than the traditional bi-Lipschitz-ness measure studied in theoretical computer science; this is advantageous because, while strong hardness-of-approximation results are known for the latter, Kamada-Kawai admits nontrivial approximation algorithms. Despite its popularity, our theoretical understanding of MDS is limited. Recently, Demaine, Hesterberg, Koehler, Lynch, and Urschel (arXiv:2109.11505) gave the first approximation algorithm with provable guarantees for Kamada-Kawai in the constant-$k$ regime, achieving cost $\text{OPT}+\epsilon$ in $n^2 \cdot 2^{\text{poly}(\Delta/\epsilon)}$ time, where $\Delta$ is the aspect ratio of the input. In this work, we give the first approximation algorithm for MDS with quasi-polynomial dependence on $\Delta$: we achieve a solution with cost $\tilde{O}(\log \Delta)\,\text{OPT}^{\Omega(1)}+\epsilon$ in time $n^{O(1)} \cdot 2^{\text{poly}(\log(\Delta)/\epsilon)}$. Our approach is based on a novel analysis of a conditioning-based rounding scheme for the Sherali-Adams LP Hierarchy. Crucially, our analysis exploits the geometry of low-dimensional Euclidean space, allowing us to avoid an exponential dependence on the aspect ratio. We believe our geometry-aware treatment of the Sherali-Adams Hierarchy is an important step towards developing general-purpose techniques for efficient metric optimization algorithms.
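To make the objective concrete, here is a minimal sketch (an illustration, not code from the paper) that evaluates the Kamada-Kawai stress of a candidate embedding together with the aspect ratio $\Delta$ of the input; the names `kk_stress` and `aspect_ratio` are hypothetical, and the average is taken over distinct pairs $i \neq j$, since $d_{i,i} = 0$ makes the diagonal terms degenerate.

```python
import numpy as np

def kk_stress(X: np.ndarray, D: np.ndarray) -> float:
    """Kamada-Kawai stress: mean over pairs i != j of (1 - ||x_i - x_j|| / d_ij)^2.

    X : (n, k) array, candidate embedding in R^k.
    D : (n, n) symmetric array of positive dissimilarities d_ij.
    """
    n = X.shape[0]
    # Pairwise Euclidean distances ||x_i - x_j|| via broadcasting.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.triu_indices(n, k=1)  # unordered distinct pairs i < j
    return float(np.mean((1.0 - dist[i, j] / D[i, j]) ** 2))

def aspect_ratio(D: np.ndarray) -> float:
    """Aspect ratio Delta = max_{i != j} d_ij / min_{i != j} d_ij."""
    i, j = np.triu_indices(D.shape[0], k=1)
    return float(D[i, j].max() / D[i, j].min())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 2))                 # candidate embedding in R^2
    D = np.abs(rng.normal(size=(8, 8))) + 1.0   # positive dissimilarities
    D = (D + D.T) / 2                           # symmetrize
    print(f"stress = {kk_stress(X, D):.4f}, Delta = {aspect_ratio(D):.2f}")
```

By symmetry of $D$ and of the Euclidean distance, averaging over unordered pairs $i < j$ agrees with the expectation over ordered pairs $i \neq j$ in the formula above.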

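The conditioning-based rounding mentioned in the abstract can also be sketched generically (again an illustration, not the paper's algorithm): solve the Sherali-Adams LP once, sequentially fix a small number of variables by sampling each from its conditional pseudo-marginal, then round the remaining variables independently given those choices. The oracle `sa_marginal` below is a hypothetical interface to the LP solution; in the MDS setting, `values` would be a discretized net of candidate positions in $\mathbb{R}^k$.

```python
import numpy as np

def sa_condition_and_round(n, values, sa_marginal, t, seed=0):
    """Schematic conditioning-based rounding for a degree-t Sherali-Adams LP.

    n           -- number of variables (e.g., points to place)
    values      -- finite label set (e.g., a discretized net of positions)
    sa_marginal -- oracle(i, fixed) -> probability vector over `values`:
                   the pseudo-marginal of variable i given the partial
                   assignment `fixed` (a dict {j: value}); consistent
                   conditionals exist only while len(fixed) < t
    t           -- degree of the relaxation, i.e., usable conditionings
    """
    rng = np.random.default_rng(seed)
    fixed = {}
    # Phase 1: condition on up to t - 1 variables, sampling each from
    # its pseudo-marginal given the choices made so far.
    for i in range(min(t - 1, n)):
        p = np.asarray(sa_marginal(i, fixed), dtype=float)
        p /= p.sum()  # guard against LP round-off
        fixed[i] = values[rng.choice(len(values), p=p)]
    # Phase 2: round all remaining variables independently from their
    # pseudo-marginals conditioned on the fixed set.
    assignment = dict(fixed)
    for i in range(len(fixed), n):
        p = np.asarray(sa_marginal(i, fixed), dtype=float)
        p /= p.sum()
        assignment[i] = values[rng.choice(len(values), p=p)]
    return assignment
```

A degree-$t$ Sherali-Adams LP has size $n^{O(t)}$, so the stated running time of $n^{O(1)} \cdot 2^{\text{poly}(\log(\Delta)/\epsilon)}$ suggests a degree of roughly $\text{poly}(\log(\Delta)/\epsilon)$; the paper's geometry-aware analysis of why such a rounding yields a low-stress embedding in low-dimensional Euclidean space is its central contribution.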