Solving a Class of Non-Convex Minimax Optimization in Federated Learning

arXiv:2310.03613 · 5 October 2023

Xidong Wu, Jianhui Sun, Zhengmian Hu, Aidong Zhang, Heng-Chiao Huang

Community: FedML
Abstract

Minimax problems arise throughout machine learning applications, ranging from adversarial training and policy evaluation in reinforcement learning to AUROC maximization. To address large-scale data challenges across multiple clients with communication-efficient distributed training, federated learning (FL) is gaining popularity. Many optimization algorithms for minimax problems have been developed in the centralized (i.e., single-machine) setting; nonetheless, algorithms for minimax problems under FL remain underexplored. In this paper, we study a class of federated nonconvex minimax optimization problems. We propose two FL algorithms (FedSGDA+ and FedSGDA-M) and improve existing complexity results for the most common minimax problems. For nonconvex-concave problems, we propose FedSGDA+ and reduce the communication complexity to $O(\varepsilon^{-6})$. Under the nonconvex-strongly-concave and nonconvex-PL minimax settings, we prove that FedSGDA-M has the best-known sample complexity of $O(\kappa^{3} N^{-1}\varepsilon^{-3})$ and the best-known communication complexity of $O(\kappa^{2}\varepsilon^{-2})$. FedSGDA-M is the first algorithm to match the best sample complexity $O(\varepsilon^{-3})$ achieved by single-machine methods in the nonconvex-strongly-concave setting. Extensive experimental results on fair classification and AUROC maximization show the efficiency of our algorithms.
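The abstract does not spell out the update rules, but the FedSGDA family builds on the general template of local stochastic gradient descent-ascent with periodic server averaging. The toy sketch below illustrates only that generic template on a simple quadratic minimax objective; it is not the authors' FedSGDA+ or FedSGDA-M, and every name, constant, and the per-client objective are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of local SGDA with periodic averaging for
# min_x max_y (1/N) * sum_i f_i(x, y), using a toy per-client objective
# f_i(x, y) = 0.5 * a_i * x^2 + b_i * x * y - 0.5 * y^2.
# This is an illustration of the general federated minimax template,
# not the paper's FedSGDA+/FedSGDA-M algorithms.

rng = np.random.default_rng(0)
n_clients, local_steps, rounds = 8, 5, 50
eta_x, eta_y = 0.05, 0.05

a = rng.uniform(0.5, 1.5, n_clients)   # per-client curvature in x
b = rng.uniform(-1.0, 1.0, n_clients)  # per-client coupling term

def grad_x(i, x, y):
    return a[i] * x + b[i] * y          # partial derivative of f_i w.r.t. x

def grad_y(i, x, y):
    return b[i] * x - y                 # partial derivative of f_i w.r.t. y

x_srv, y_srv = 1.0, 1.0                 # server (global) primal/dual variables
for r in range(rounds):
    xs, ys = [], []
    for i in range(n_clients):
        x, y = x_srv, y_srv             # each client starts from the server model
        for _ in range(local_steps):    # local descent on x, ascent on y
            x -= eta_x * grad_x(i, x, y)
            y += eta_y * grad_y(i, x, y)
        xs.append(x)
        ys.append(y)
    # one communication round: average the local iterates on the server
    x_srv, y_srv = float(np.mean(xs)), float(np.mean(ys))

print(f"final iterate: x={x_srv:.4f}, y={y_srv:.4f}")
```

Running multiple local steps between averaging rounds is what keeps the communication cost low relative to fully synchronized SGDA; the paper's complexity bounds quantify how many such rounds and samples suffice in the nonconvex settings it studies.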
