Random Separating Hyperplane Theorem and Learning Polytopes
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. Our first result, the Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between a point $a$ and a polytope $K$ with $k$ vertices and unit diameter in $\mathbb{R}^d$ is at least $\delta$, where $\delta$ is a fixed constant in $(0,1)$, then a randomly chosen hyperplane separates $a$ and $K$ with probability at least $1/\mathrm{poly}(k)$ and margin at least $\Omega(\delta/\sqrt{d})$. An immediate consequence of our result is the first near-optimal bound on the error increase in the reduction from a Separation oracle to an Optimization oracle over a polytope.

RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the "Hausdorff problem", of learning a unit diameter polytope $K$ within Hausdorff distance $\delta$, given an optimization oracle for $K$. Using RSH, we show that with polynomially many random queries to the optimization oracle, $K$ can be approximated within error $O(\delta)$. To our knowledge this is the first provable algorithm for the Hausdorff problem. Building on this result, we show that if the vertices of $K$ are well separated, then an optimization oracle can be used to generate a list of points, each within Hausdorff distance $O(\delta)$ of $K$, with the property that the list contains a point close to each vertex of $K$. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling and LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption.
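The RSH statement is easy to probe numerically. Below is a minimal Monte Carlo sketch in Python; the instance (the dimension, the vertex count, the placement of $K$ and $a$, and the constant $1/4$ in the margin target) is an illustrative assumption for the demo, not the construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance: d, k, delta, and the 1/4 in the margin target
# are assumptions for this demo, not values from the paper.
d, k = 20, 8
delta = 0.3

# Vertices of K placed in the hyperplane {x_1 = 0}, rescaled to unit
# diameter, so the point a = (delta, 0, ..., 0) is at distance >= delta
# from K.
V = rng.normal(size=(k, d))
V[:, 0] = 0.0
V /= np.max(np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2))
a = np.zeros(d)
a[0] = delta

# Monte Carlo estimate of the RSH event: a uniformly random unit
# direction u separates a from K whenever u.a exceeds max_{v in K} u.v,
# and that gap lower-bounds the margin of a hyperplane orthogonal to u.
trials = 50_000
target = delta / (4.0 * np.sqrt(d))  # concrete stand-in for Omega(delta/sqrt(d))
hits = 0
for _ in range(trials):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)           # uniform direction on the unit sphere
    if u @ a - np.max(V @ u) >= target:
        hits += 1

print(f"empirical P(separation with margin >= target) = {hits / trials:.4f}")
```

The empirical probability is small but bounded away from zero, which is the qualitative content of the $1/\mathrm{poly}(k)$ guarantee.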
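The Hausdorff-problem algorithm can likewise be caricatured: query the optimization oracle in random directions and collect the returned maximizers. The sketch below uses an exact oracle over a known vertex set (`V_true` and `opt_oracle` are hypothetical names introduced here); the paper only assumes approximate oracle answers, which is where the $O(\delta)$ error arises.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth polytope K = conv(V_true); opt_oracle is an
# idealized, exact stand-in for the paper's optimization oracle.
d, k = 6, 4
V_true = rng.normal(size=(k, d))

def opt_oracle(u):
    # Return a maximizer of u . x over K (exact here; approximate in the paper).
    return V_true[np.argmax(V_true @ u)]

# Query the oracle in polynomially many random directions; the convex hull
# of the returned points is the candidate approximation to K.
queries = [rng.normal(size=d) for _ in range(500)]
points = np.unique(np.array([opt_oracle(u) for u in queries]), axis=0)

print(f"recovered {len(points)} of {k} vertices from {len(queries)} random queries")
```

With an exact oracle every returned point is a vertex of $K$, so the deduplicated list recovers the vertex set; with an approximate oracle one gets the list of near-vertex points that the paper then prunes.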