Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method

18 March 2020
Babak Barazandeh
Meisam Razaviyayn
arXiv:2003.08093 [abs] [PDF] [HTML]
Abstract

Min-max saddle point games appear in a wide range of applications in machine learning and signal processing. Despite their wide applicability, theoretical studies are mostly limited to the special convex-concave structure. While some recent works have generalized these results to special smooth non-convex cases, our understanding of non-smooth scenarios is still limited. In this work, we study a special form of non-smooth min-max games in which the objective function is (strongly) convex with respect to one of the players' decision variables. We show that a simple multi-step proximal gradient descent-ascent algorithm converges to an ε-first-order Nash equilibrium of the min-max game with a number of gradient evaluations that is polynomial in 1/ε. We also show that our notion of stationarity is stronger than existing ones in the literature. Finally, we evaluate the performance of the proposed algorithm through an adversarial attack on a LASSO estimator.
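To illustrate the kind of scheme the abstract describes, below is a minimal NumPy sketch of a multi-step proximal gradient descent-ascent loop: several gradient-ascent steps on the maximization variable, followed by one proximal gradient step on the minimization variable, with the non-smooth term handled through its proximal operator. The toy quadratic coupling, the L1 regularizer, the step sizes, and the inner-step count are illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
# Minimal sketch of multi-step proximal gradient descent-ascent for
# min_x max_y f(x, y) + g(x), with a non-smooth g handled via its prox.
# The objective below is a toy stand-in, not the paper's setup.
import numpy as np

def prox_l1(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy smooth coupling f(x, y) = x^T A y - (mu/2) ||y||^2,
# which is strongly concave in y (the max player).
rng = np.random.default_rng(0)
n, m = 20, 10
A = rng.standard_normal((n, m))
mu = 1.0      # strong concavity parameter in y (assumption)
lam = 0.1     # weight of the non-smooth term g(x) = lam * ||x||_1

grad_x = lambda x, y: A @ y            # gradient of f in x
grad_y = lambda x, y: A.T @ x - mu * y  # gradient of f in y

x = rng.standard_normal(n)
y = np.zeros(m)
eta_x, eta_y = 0.01, 0.1
K_inner = 20  # inner ascent steps per outer iteration (assumption)

for it in range(500):
    # Multi-step (approximate) maximization over y for the current x.
    for _ in range(K_inner):
        y = y + eta_y * grad_y(x, y)
    # One proximal gradient descent step on x; the L1 term enters via its prox.
    x = prox_l1(x - eta_x * grad_x(x, y), eta_x * lam)

print("final ||x||_1 =", np.linalg.norm(x, 1))
```

On this toy instance the outer iterate shrinks toward zero, which is the expected minimizer; the sketch is only meant to show how the inner ascent loop and the single proximal step interleave.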
