ResearchTrend.AI

arXiv:1706.06028v4 (latest)

The surprising secret identity of the semidefinite relaxation of K-means: manifold learning

19 June 2017
Mariano Tepper
Anirvan M. Sengupta
D. Chklovskii
Abstract

In recent years, semidefinite programs (SDPs) have been the subject of interesting research in the field of clustering. In many cases, these convex programs deliver the same answers as non-convex alternatives and come with a guarantee of optimality. Unexpectedly, we find that a popular semidefinite relaxation of K-means (SDP-KM) learns manifolds present in the data, something not possible with the original K-means formulation. To build an intuitive understanding of its manifold learning capabilities, we develop a theoretical analysis of SDP-KM on idealized datasets. Additionally, we show that SDP-KM even segregates linearly non-separable manifolds. SDP-KM is convex, and its globally optimal solution can be found by generic SDP solvers in polynomial time. To overcome the poor performance of these solvers on large datasets, we explore efficient algorithms based on the explicit Gramian representation of the problem. These features render SDP-KM a versatile and interesting tool for manifold learning while remaining amenable to theoretical analysis.
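For context, the semidefinite relaxation of K-means referred to here is commonly written (in the standard form due to Peng and Wei; the notation below is illustrative, not taken from this page) as minimizing ⟨D, Z⟩ over symmetric matrices Z subject to Z ⪰ 0, Z ≥ 0 entrywise, Z𝟙 = 𝟙, and tr(Z) = k, where D is the matrix of squared pairwise distances. Since D_ij = G_ii + G_jj − 2·G_ij for the Gram matrix G = XXᵀ, the row-sum constraint gives ⟨D, Z⟩ = 2·tr(G) − 2·⟨G, Z⟩, so minimizing the distance objective is equivalent to maximizing ⟨G, Z⟩ — one way to read the "explicit Gramian representation" mentioned in the abstract. A minimal numpy sketch checking this identity on a feasible clustering matrix (all variable names here are our own, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))      # 6 points in R^2
G = X @ X.T                          # Gram matrix G = X X^T

# Squared pairwise distances: D_ij = G_ii + G_jj - 2 G_ij
sq = np.diag(G)
D = sq[:, None] + sq[None, :] - 2 * G

# A feasible Z for the relaxation: the normalized clustering matrix
# of the partition {0,1,2} | {3,4,5}. It satisfies Z >= 0, Z 1 = 1,
# tr(Z) = k = 2, and Z is positive semidefinite.
Z = np.zeros((6, 6))
for cluster in ([0, 1, 2], [3, 4, 5]):
    Z[np.ix_(cluster, cluster)] = 1.0 / len(cluster)

# Check the equivalence <D, Z> = 2 tr(G) - 2 <G, Z>
lhs = np.sum(D * Z)
rhs = 2 * np.trace(G) - 2 * np.sum(G * Z)
print(np.isclose(lhs, rhs))  # True: distance and Gramian objectives agree
```

The identity holds for any Z with unit row sums, which is why the relaxation can be posed entirely in terms of the Gram matrix.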
