ResearchTrend.AI

Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization

22 July 2019
Adithya M. Devraj, Jianshu Chen

Papers citing "Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization" (5 papers)

Doubly Robust Instance-Reweighted Adversarial Training
Daouda Sow, Sen-Fon Lin, Zhangyang Wang, Yitao Liang (AAML, OOD)
01 Aug 2023

Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms
Minghao Yang, Xiyuan Wei, Tianbao Yang, Yiming Ying
07 Jul 2023

Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization
Tianyi Chen, Yuejiao Sun, W. Yin
25 Aug 2020

Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
S. Du, Wei Hu
05 Feb 2018

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang (ODL)
19 Mar 2014