Minimax Optimization with Smooth Algorithmic Adversaries

2 June 2021
Tanner Fiez
Chi Jin
Praneeth Netrapalli
Lillian J. Ratliff
    AAML
arXiv: 2106.01488 (PDF, HTML)
Abstract

This paper considers minimax optimization $\min_x \max_y f(x, y)$ in the challenging setting where $f$ can be both nonconvex in $x$ and nonconcave in $y$. Though such optimization problems arise in many machine learning paradigms, including training generative adversarial networks (GANs) and adversarially robust models, many fundamental issues remain in theory, such as the absence of efficiently computable optimality notions and the cyclic or diverging behavior of existing algorithms. Our framework sprouts from the practical consideration that, under a computational budget, the max-player cannot fully maximize $f(x, \cdot)$, since nonconcave maximization is NP-hard in general. We therefore propose a new algorithm for the min-player to play against smooth algorithms deployed by the adversary (i.e., the max-player) instead of against full maximization. Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate "stationary point" in a polynomial number of iterations. Our framework covers practical settings where the smooth algorithm deployed by the adversary is multi-step stochastic gradient ascent or its accelerated version. We further provide complementary experiments that confirm our theoretical findings and demonstrate the effectiveness of the proposed approach in practice.
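The core idea, optimizing against the adversary's algorithm rather than against the intractable exact maximum, can be illustrated with a short sketch. The following is a minimal toy example in JAX and not the paper's actual algorithm: the objective `f`, the step count, and the learning rates are placeholder choices, and the adversary is modeled as plain multi-step gradient ascent, one of the smooth algorithms the framework covers.

```python
import jax
import jax.numpy as jnp

# Toy nonconvex-nonconcave objective (a placeholder choice, not from the paper).
def f(x, y):
    return jnp.sin(x) * jnp.cos(y) + 0.1 * x * y

# The adversary's smooth algorithm: K steps of gradient ascent on y.
# Each step is a differentiable map, so the composition is smooth in x.
def adversary(x, y0, steps=10, lr=0.1):
    y = y0
    for _ in range(steps):
        y = y + lr * jax.grad(f, argnums=1)(x, y)  # ascent step on y
    return y

# The min-player's surrogate: f evaluated at the adversary's response,
# in place of the exact maximum max_y f(x, y).
def surrogate(x, y0):
    return f(x, adversary(x, y0))

# Descend on the surrogate. jax.grad differentiates through the unrolled
# ascent steps, so the min-player accounts for how the adversary's
# output moves as x changes.
grad_x = jax.jit(jax.grad(surrogate))  # compile once for speed
x, y0 = 1.0, 0.5
for t in range(200):
    x = x - 0.05 * grad_x(x, y0)
```

Descending on this surrogate, rather than on $\max_y f(x, \cdot)$, is what makes monotonic progress achievable in the paper's analysis; the published algorithm adds further machinery (including the appropriate notion of "stationary point") to obtain the polynomial iteration bound.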
