On the Iteration Complexity of Oblivious First-Order Optimization Algorithms

11 May 2016 · Yossi Arjevani, Ohad Shamir · arXiv:1605.03529

Papers citing "On the Iteration Complexity of Oblivious First-Order Optimization Algorithms" (9 papers)

  • Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization
    Haochuan Li, Yi Tian, Jingzhao Zhang, Ali Jadbabaie · 18 Apr 2021
  • Potential Function-based Framework for Making the Gradients Small in Convex and Min-Max Optimization
    Jelena Diakonikolas, Puqian Wang · 28 Jan 2021
  • Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?
    Zhiyuan Li, Yi Zhang, Sanjeev Arora · 16 Oct 2020 · BDL, MLT
  • Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
    Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan · 24 Jun 2020
  • Complexity Guarantees for Polyak Steps with Momentum
    Mathieu Barré, Adrien B. Taylor, Alexandre d'Aspremont · 03 Feb 2020
  • Accelerating Smooth Games by Manipulating Spectral Shapes
    Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel · 02 Jan 2020
  • A Universally Optimal Multistage Accelerated Stochastic Gradient Method
    N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar · 23 Jan 2019 · ODL
  • Optimal algorithms for smooth and strongly convex distributed optimization in networks
    Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié · 28 Feb 2017
  • Dimension-Free Iteration Complexity of Finite Sum Optimization Problems
    Yossi Arjevani, Ohad Shamir · 30 Jun 2016