Enhanced Adaptive Gradient Algorithms for Nonconvex-PL Minimax Optimization

Abstract

Minimax optimization has recently been widely applied in many machine learning tasks, such as generative adversarial networks, robust learning, and reinforcement learning. In this paper, we study a class of nonconvex-nonconcave minimax optimization problems with nonsmooth regularization, where the objective function is possibly nonconvex in the primal variable $x$, and is nonconcave but satisfies the Polyak-Łojasiewicz (PL) condition in the dual variable $y$. To solve these stochastic nonconvex-PL minimax problems, we propose a class of enhanced momentum-based gradient descent ascent methods (i.e., MSGDA and AdaMSGDA). In particular, our AdaMSGDA algorithm can use various adaptive learning rates in updating the variables $x$ and $y$ without relying on any specific type. Theoretically, we prove that our methods attain the best known sample complexity of $\tilde{O}(\epsilon^{-3})$, requiring only one sample at each loop, in finding an $\epsilon$-stationary solution. Numerical experiments on a PL-game and Wasserstein-GAN demonstrate the efficiency of our proposed methods.
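To make the general update scheme concrete, below is a minimal sketch of one momentum-based stochastic gradient descent ascent (GDA) step of the kind the abstract describes. This is an illustrative assumption, not the paper's exact MSGDA or AdaMSGDA update; the function name, momentum rule, and hyperparameters (eta_x, eta_y, beta) are all hypothetical.

```python
# Hypothetical sketch of one momentum-based stochastic GDA step;
# names and hyperparameters are illustrative, not the paper's
# exact MSGDA/AdaMSGDA algorithm.
def momentum_gda_step(grad_x, grad_y, x, y, m_x, m_y,
                      eta_x=1e-3, eta_y=1e-2, beta=0.9):
    """One stochastic GDA step with momentum on both variables.

    grad_x, grad_y: stochastic gradients w.r.t. x (primal) and
                    y (dual), each estimated from a single sample.
    m_x, m_y:       running momentum buffers.
    """
    # Exponential moving averages of the stochastic gradients.
    m_x = beta * m_x + (1.0 - beta) * grad_x
    m_y = beta * m_y + (1.0 - beta) * grad_y
    # Descent on the primal variable, ascent on the dual variable.
    x = x - eta_x * m_x
    y = y + eta_y * m_y
    return x, y, m_x, m_y
```

An adaptive variant in the spirit of AdaMSGDA would additionally rescale eta_x and eta_y per step using an adaptive learning-rate rule; the abstract notes the analysis does not rely on any specific choice of that rule.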

@article{huang2025_2303.03984,
  title={Enhanced Adaptive Gradient Algorithms for Nonconvex-PL Minimax Optimization},
  author={Feihu Huang and Chunyu Xuan and Xinrui Wang and Siqi Zhang and Songcan Chen},
  journal={arXiv preprint arXiv:2303.03984},
  year={2025}
}