ResearchTrend.AI
On the Randomized Complexity of Minimizing a Convex Quadratic Function
Max Simchowitz
24 July 2018 · arXiv:1807.09386

Papers citing "On the Randomized Complexity of Minimizing a Convex Quadratic Function"

6 / 6 papers shown
Efficient Convex Optimization Requires Superlinear Memory
A. Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant
29 Mar 2022
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro
02 Feb 2021
Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan
24 Jun 2020
Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?
Ohad Shamir
27 Feb 2020
Is Local SGD Better than Minibatch SGD?
Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. B. McMahan, Ohad Shamir, Nathan Srebro
18 Feb 2020
The gradient complexity of linear regression
M. Braverman, Elad Hazan, Max Simchowitz, Blake E. Woodworth
06 Nov 2019