Riemannian Optimization on Relaxed Indicator Matrix Manifold

Abstract

The indicator matrix plays an important role in machine learning, but optimizing it is an NP-hard problem. We propose a new relaxation of the indicator matrix and prove that this relaxation forms a manifold, which we call the Relaxed Indicator Matrix Manifold (RIM manifold). Based on Riemannian geometry, we develop a Riemannian toolbox for optimization on the RIM manifold; in particular, we provide several Retraction methods, including a fast Retraction that yields geodesics. We show that the RIM manifold generalizes the doubly stochastic manifold: existing methods on the doubly stochastic manifold cost \( \mathcal{O}(n^3) \) per Retraction, whereas optimization on the RIM manifold costs \( \mathcal{O}(n) \) and often yields better results. We conducted extensive experiments to support these conclusions, including image denoising problems with millions of variables, and applied the RIM manifold to Ratio Cut, for which we provide a rigorous convergence proof and achieve clustering results that outperform state-of-the-art methods. Our code is available at this https URL.
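
As a rough illustration of the kind of relaxation involved (our sketch, not the paper's definition): a common way to relax the hard indicator constraint is to let each row of the assignment matrix lie on the probability simplex, so that every row is nonnegative and sums to one. The Python sketch below projects each row onto the simplex using the algorithm of Duchi et al. (2008); it is a generic example of such a projection step, not the paper's RIM manifold Retraction, and the function name is our own.

import numpy as np

def project_rows_to_simplex(M):
    """Project each row of M onto the probability simplex.
    Illustrative only; not the RIM manifold Retraction from the paper."""
    n, c = M.shape
    U = np.sort(M, axis=1)[:, ::-1]      # each row sorted in descending order
    css = np.cumsum(U, axis=1)           # row-wise cumulative sums
    ks = np.arange(1, c + 1)
    cond = U + (1.0 - css) / ks > 0      # support condition of Duchi et al.
    rho = cond.sum(axis=1)               # support size per row (always >= 1)
    theta = (css[np.arange(n), rho - 1] - 1.0) / rho
    return np.maximum(M - theta[:, None], 0.0)

# Toy usage: soft cluster assignments for n = 4 points and c = 3 clusters.
M = project_rows_to_simplex(np.random.randn(4, 3))
assert np.allclose(M.sum(axis=1), 1.0) and (M >= 0).all()

Each row costs \( \mathcal{O}(c \log c) \) due to the sort, so the whole step scales essentially linearly in the number of rows \( n \), consistent in spirit with the \( \mathcal{O}(n) \) scaling the abstract highlights, though the paper's fast geodesic Retraction may differ in detail.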

@article{yuan2025_2503.20505,
  title={Riemannian Optimization on Relaxed Indicator Matrix Manifold},
  author={Jinghui Yuan and Fangyuan Xie and Feiping Nie and Xuelong Li},
  journal={arXiv preprint arXiv:2503.20505},
  year={2025}
}