Large-Scale Non-convex Stochastic Constrained Distributionally Robust Optimization

1 April 2024
Qi Zhang
Yi Zhou
Ashley Prater-Bennette
Lixin Shen
Shaofeng Zou
Abstract

Distributionally robust optimization (DRO) is a powerful framework for training models that are robust to data distribution shifts. This paper focuses on constrained DRO, which admits an explicit characterization of the robustness level. Existing studies of constrained DRO mostly assume convex loss functions and exclude the practical and challenging case of non-convex losses, e.g., neural networks. This paper develops a stochastic algorithm, together with a performance analysis, for non-convex constrained DRO. The per-iteration computational complexity of our stochastic algorithm is independent of the overall dataset size, making it suitable for large-scale applications. We focus on uncertainty sets defined by the general Cressie-Read family of divergences, which includes the $\chi^2$-divergence as a special case. We prove that our algorithm finds an $\epsilon$-stationary point with a computational complexity of $\mathcal{O}(\epsilon^{-3k_*-5})$, where $k_*$ is the parameter of the Cressie-Read divergence. Numerical results indicate that our method outperforms existing methods. Our method also applies to smoothed conditional value at risk (CVaR) DRO.
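For context, the constrained DRO problem over a Cressie-Read divergence ball is usually written as below. This is the standard formulation from the DRO literature, not a transcription of the paper's own notation; the radius $\rho$ is the explicit robustness level the abstract refers to.

\[
  \min_{\theta} \;\; \sup_{Q:\, D_{\phi_k}(Q \,\|\, P) \le \rho}
  \mathbb{E}_{x \sim Q}\bigl[\ell(\theta; x)\bigr],
  \qquad
  \phi_k(t) = \frac{t^{k} - k t + k - 1}{k(k-1)}
\]

Here $P$ is the data distribution, $\ell$ the (possibly non-convex) loss, and $k > 1$ indexes the Cressie-Read family; $k = 2$ recovers the $\chi^2$-divergence, and in the standard notation $k_* = k/(k-1)$ is the conjugate exponent that appears in the complexity bound.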

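The closing claim about smoothed CVaR can be made concrete with the Rockafellar-Uryasev representation, in which CVaR at tail level $\alpha$ is a minimization over a threshold $\eta$ and a softplus smooths the hinge $(\cdot)_+$. The sketch below is a generic illustration of that smoothed objective, not the paper's algorithm; the name smoothed_cvar and the values of alpha and mu are illustrative assumptions.

import numpy as np

def smoothed_cvar(losses, eta, alpha=0.1, mu=0.01):
    # Smoothed Rockafellar-Uryasev CVaR: eta + E[softplus(losses - eta)] / alpha,
    # where softplus(x) = mu * log(1 + exp(x / mu)) replaces the hinge (x)_+
    # so the objective is differentiable in eta and in the losses.
    # alpha (tail level) and mu (smoothing) are illustrative choices.
    smooth_hinge = mu * np.logaddexp(0.0, (losses - eta) / mu)
    return eta + smooth_hinge.mean() / alpha

rng = np.random.default_rng(0)
batch_losses = rng.exponential(scale=1.0, size=256)  # stand-in per-sample losses

# Minimize over eta; a coarse grid keeps the sketch dependency-free.
etas = np.linspace(0.0, batch_losses.max(), 200)
best_eta = min(etas, key=lambda e: smoothed_cvar(batch_losses, e))
print(f"smoothed CVaR_0.1 estimate: {smoothed_cvar(batch_losses, best_eta):.3f}")

In an actual training loop, eta would typically be updated jointly with the model parameters by stochastic gradient descent rather than by the grid search used here.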
View on arXiv: https://arxiv.org/abs/2404.01200