Variance Reduction for Matrix Games

3 July 2019 · arXiv:1907.02056
Yair Carmon, Yujia Jin, Aaron Sidford, Kevin Tian
Abstract

We present a randomized primal-dual algorithm that solves the problem $\min_{x} \max_{y} y^\top A x$ to additive error $\epsilon$ in time $\mathrm{nnz}(A) + \sqrt{\mathrm{nnz}(A)\,n}/\epsilon$, for matrix $A$ with larger dimension $n$ and $\mathrm{nnz}(A)$ nonzero entries. This improves on the best known exact gradient methods by a factor of $\sqrt{\mathrm{nnz}(A)/n}$ and is faster than fully stochastic gradient methods in the accurate and/or sparse regime $\epsilon \le \sqrt{n/\mathrm{nnz}(A)}$. Our results hold for $x, y$ in the simplex (matrix games, linear programming) and for $x$ in an $\ell_2$ ball and $y$ in the simplex (perceptron / SVM, minimum enclosing ball). Our algorithm combines Nemirovski's "conceptual prox-method" and a novel reduced-variance gradient estimator based on "sampling from the difference" between the current iterate and a reference point.
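To make the "sampling from the difference" idea concrete, here is a minimal NumPy sketch for the simplex-simplex (matrix game) setting. It is an illustrative assumption, not the paper's exact algorithm: the paper builds on Nemirovski's conceptual prox-method, whereas this sketch pairs the variance-reduced estimator with plain entropic mirror descent, and the function names (`vr_grad_estimate`, `solve_matrix_game`), step size, and epoch lengths are all hypothetical choices. The key element it does show is the estimator: exact gradients at a reference point plus a single-sample correction drawn in proportion to the difference between the current iterate and that reference, so the estimator's variance shrinks as the iterate approaches the reference.

```python
import numpy as np

rng = np.random.default_rng(0)

def simplex_mirror_step(z, grad, eta):
    """Entropic (multiplicative-weights) mirror step on the simplex."""
    w = z * np.exp(-eta * grad)
    return w / w.sum()

def vr_grad_estimate(A, x, y, x0, y0, gx0, gy0):
    """Reduced-variance gradient estimate for min_x max_y y^T A x.

    gx0 = A^T y0 and gy0 = A x0 are exact gradients at the reference
    point (x0, y0), computed once per epoch. The stochastic corrections
    estimate A^T (y - y0) and A (x - x0) from a single index sampled
    'from the difference' (probability proportional to |w - w0|).
    """
    gx, gy = gx0.copy(), gy0.copy()
    dy = y - y0
    sy = np.abs(dy).sum()
    if sy > 0:                       # unbiased estimate of A^T (y - y0)
        py = np.abs(dy) / sy
        i = rng.choice(len(dy), p=py)
        gx += A[i, :] * (dy[i] / py[i])
    dx = x - x0
    sx = np.abs(dx).sum()
    if sx > 0:                       # unbiased estimate of A (x - x0)
        px = np.abs(dx) / sx
        j = rng.choice(len(dx), p=px)
        gy += A[:, j] * (dx[j] / px[j])
    return gx, gy

def solve_matrix_game(A, epochs=50, inner=200, eta=0.05):
    """Hypothetical solver sketch under the assumptions stated above."""
    m, n = A.shape
    x0, y0 = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    for _ in range(epochs):
        gx0, gy0 = A.T @ y0, A @ x0  # exact gradients at the reference point
        x, y = x0.copy(), y0.copy()
        xs, ys = np.zeros(n), np.zeros(m)
        for _ in range(inner):
            gx, gy = vr_grad_estimate(A, x, y, x0, y0, gx0, gy0)
            x = simplex_mirror_step(x, gx, eta)    # x plays to minimize
            y = simplex_mirror_step(y, -gy, eta)   # y plays to maximize
            xs += x
            ys += y
        x0, y0 = xs / inner, ys / inner            # new reference = epoch average
    return x0, y0

A = rng.standard_normal((40, 60))
x, y = solve_matrix_game(A)
# Duality gap: max_y y^T A x - min_x y^T A x = max(A x) - min(A^T y) >= 0.
print("duality gap:", (A @ x).max() - (A.T @ y).min())
```

Note the cost structure this sketch mirrors: the exact gradients cost $\mathrm{nnz}(A)$ once per epoch, while each inner iteration touches only one row and one column of $A$, which is how the $\mathrm{nnz}(A) + \sqrt{\mathrm{nnz}(A)\,n}/\epsilon$ tradeoff in the abstract arises.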
