Convex optimization over a probability simplex

We propose a new iteration scheme, the Cauchy-Simplex, to optimize convex problems over the probability simplex $\{w \in \mathbb{R}^n \mid \sum_i w_i = 1,\ w_i \geq 0\}$. Specifically, we map the simplex to the positive quadrant of a unit sphere, envisage gradient descent in latent variables, and map the result back in a way that depends only on the simplex variable. Moreover, proving rigorous convergence results in this formulation leads naturally to tools from information theory (e.g., cross-entropy and KL divergence). Each iteration of the Cauchy-Simplex consists of simple operations, making it well-suited for high-dimensional problems. In continuous time, we prove that $f(w(T)) - f(w^*) = \mathcal{O}(1/T)$ for differentiable real-valued convex functions, where $T$ is the number of time steps and $w^*$ is the optimal solution. Numerical experiments on projection onto convex hulls show faster convergence than comparable algorithms. Finally, we apply our algorithm to online learning problems and prove convergence of the average regret for (1) prediction with expert advice and (2) universal portfolios.
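To make the kind of update the abstract describes concrete, the sketch below implements a simplex-preserving gradient step in Python: the direction $w \odot (\nabla f(w) - \langle w, \nabla f(w)\rangle)$ sums to zero, so the weights stay normalized, and the step size is capped to keep them nonnegative. This is a minimal illustration based on the abstract's description, not the authors' reference implementation; the function names, the positivity cap, and the toy convex-hull projection problem are assumptions made here for demonstration.

```python
import numpy as np

def simplex_step(w, grad, lr=0.1):
    """One hypothetical simplex-preserving update (illustrative sketch only)."""
    # Direction d = w * (grad - <w, grad>) sums to zero when sum(w) = 1,
    # so w - lr * d stays on the hyperplane sum(w) = 1.
    d = w * (grad - np.dot(w, grad))
    # Cap the step so every coordinate of w remains strictly positive.
    pos = d > 0
    if np.any(pos):
        lr = min(lr, 0.9 * np.min(w[pos] / d[pos]))
    w_new = w - lr * d
    return w_new / w_new.sum()  # renormalize to guard against round-off

# Toy use case (assumed for illustration): project a point y onto the convex
# hull of the columns of A, i.e. minimize f(w) = 0.5 * ||A w - y||^2 over the simplex.
rng = np.random.default_rng(0)
A, y = rng.normal(size=(5, 20)), rng.normal(size=5)
w = np.full(20, 1 / 20)                  # start at the barycenter of the simplex
for _ in range(500):
    grad = A.T @ (A @ w - y)             # gradient of the quadratic objective
    w = simplex_step(w, grad)
print(w.sum(), w.min())                  # ~1.0 and >= 0: iterates stay on the simplex
```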
@article{chok2025_2305.09046,
  title   = {Convex optimization over a probability simplex},
  author  = {James Chok and Geoffrey M. Vasil},
  journal = {arXiv preprint arXiv:2305.09046},
  year    = {2025}
}