
Metric Embeddings Beyond Bi-Lipschitz Distortion via Sherali-Adams

Annual Conference Computational Learning Theory (COLT), 2023
Main: 33 pages
4 figures
Bibliography: 5 pages
Abstract

Metric embeddings are a widely used method in algorithm design, where generally a ``complex'' metric is embedded into a simpler, lower-dimensional one. Historically, the theoretical computer science community has focused on bi-Lipschitz embeddings, which guarantee that every pairwise distance is approximately preserved. In contrast, alternative embedding objectives commonly used in practice avoid bi-Lipschitz distortion; yet these approaches have received comparatively little theoretical study. In this paper, we focus on Multi-Dimensional Scaling (MDS), where we are given a set of non-negative dissimilarities $\{d_{i,j}\}_{i,j\in [n]}$ over $n$ points, and the goal is to find an embedding $\{x_1,\dots,x_n\} \subset \mathbb{R}^k$ that minimizes
$$\textrm{OPT}=\min_{x}\mathbb{E}_{i,j\in [n]}\left(1-\frac{\|x_i - x_j\|}{d_{i,j}}\right)^2.$$
Despite its popularity, our theoretical understanding of MDS is extremely limited. Recently, Demaine et al. (arXiv:2109.11505) gave the first approximation algorithm with provable guarantees for this objective, which achieves an embedding into constant-dimensional Euclidean space with cost $\textrm{OPT}+\epsilon$ in $n^2\cdot 2^{\textrm{poly}(\Delta/\epsilon)}$ time, where $\Delta$ is the aspect ratio of the input dissimilarities. For metrics that admit low-cost embeddings, $\Delta$ scales polynomially in $n$. In this work, we give the first approximation algorithm for MDS with quasi-polynomial dependency on $\Delta$: for constant-dimensional Euclidean space, we achieve a solution with cost $O(\log \Delta)\cdot \textrm{OPT}^{\Omega(1)}+\epsilon$ in time $n^{O(1)} \cdot 2^{\textrm{poly}(\log(\Delta)/\epsilon)}$. Our algorithms are based on a novel geometry-aware analysis of a conditional rounding of the Sherali-Adams LP hierarchy, which allows us to avoid the exponential dependency on the aspect ratio that would typically result from this rounding.
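As a concrete point of reference, the MDS objective above (average squared relative error of the embedded distances) can be sketched in a few lines of code. This is only an illustrative evaluation of the cost of a candidate embedding, not the paper's algorithm; the function name `mds_cost` and the input conventions are our own.

```python
import math

def mds_cost(points, d):
    """Evaluate the MDS objective E_{i,j} (1 - ||x_i - x_j|| / d_{i,j})^2.

    points: dict mapping index -> coordinate tuple in R^k
    d: dict mapping ordered pair (i, j), i != j, to dissimilarity d_{i,j} > 0
    """
    pairs = [(i, j) for (i, j) in d if i != j]
    total = 0.0
    for i, j in pairs:
        dist = math.dist(points[i], points[j])  # Euclidean distance ||x_i - x_j||
        total += (1.0 - dist / d[(i, j)]) ** 2
    # Expectation over pairs = average of the per-pair squared relative errors.
    return total / len(pairs)
```

For instance, three collinear points at unit spacing realize the dissimilarities $d_{0,1}=d_{1,2}=1$, $d_{0,2}=2$ exactly, so the cost of that embedding is $0$.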
