Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms

7 March 2017
Jialei Wang, Lin Xiao
arXiv:1703.02624

Papers citing "Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms"

5 / 5 papers shown
Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach
Colin Dirren, Mattia Bianchi, Panagiotis D. Grontas, John Lygeros, Florian Dörfler
18 Oct 2024

Decentralized Stochastic Variance Reduced Extragradient Method
Luo Luo, Haishan Ye
01 Feb 2022

Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning
Luofeng Liao, Zuyue Fu, Zhuoran Yang, Yixin Wang, Mladen Kolar, Zhaoran Wang
OffRL
19 Feb 2021

Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
S. Du, Wei Hu
05 Feb 2018

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
ODL
19 Mar 2014