ResearchTrend.AI

Uniform Convergence of Gradients for Non-Convex Learning and Optimization
Dylan J. Foster, Ayush Sekhari, Karthik Sridharan
arXiv:1810.11059 · 25 October 2018

Papers citing "Uniform Convergence of Gradients for Non-Convex Learning and Optimization" (13 papers)

  1. Loss Gradient Gaussian Width based Generalization and Optimization Guarantees
     A. Banerjee, Qiaobo Li, Yingxue Zhou · 11 Jun 2024
  2. Machine Learning and the Future of Bayesian Computation
     Steven Winter, Trevor Campbell, Lizhen Lin, Sanvesh Srivastava, David B. Dunson · TPM · 21 Apr 2023
  3. Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods
     El Mahdi Chayti, N. Doikov, Martin Jaggi · ODL · 23 Feb 2023
  4. Learning Single-Index Models with Shallow Neural Networks
     A. Bietti, Joan Bruna, Clayton Sanford, M. Song · 27 Oct 2022
  5. Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability
     Peisong Wen, Qianqian Xu, Zhiyong Yang, Yuan He, Qingming Huang · 27 Sep 2022
  6. Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
     I Zaghloul Amir, Roi Livni, Nathan Srebro · 27 Feb 2022
  7. Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints
     Shaojie Li, Yong Liu · 19 Jul 2021
  8. Generalization Guarantees for Neural Architecture Search with Train-Validation Split
     Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi · AI4CE, OOD · 29 Apr 2021
  9. Decentralized Federated Averaging
     Tao Sun, Dongsheng Li, Bao Wang · FedML · 23 Apr 2021
  10. A Stochastic Subgradient Method for Distributionally Robust Non-Convex Learning
      Mert Gurbuzbalaban, A. Ruszczynski, Landi Zhu · 08 Jun 2020
  11. Stopping Criteria for, and Strong Convergence of, Stochastic Gradient Descent on Bottou-Curtis-Nocedal Functions
      V. Patel · 01 Apr 2020
  12. Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
      Damek Davis, Dmitriy Drusvyatskiy · 17 Oct 2018
  13. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
      Hamed Karimi, J. Nutini, Mark W. Schmidt · 16 Aug 2016