Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization

International Conference on Machine Learning (ICML), 2020
18 September 2020
Pan Zhou, Xiaotong Yuan
arXiv: 2009.09835 (abs, PDF, HTML)

Papers citing "Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization"

2 papers
Towards Theoretically Understanding Why SGD Generalizes Better Than ADAM in Deep Learning
Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Guosheng Lin, E. Weinan
12 Oct 2020
Theory-Inspired Path-Regularized Differential Network Architecture Search
Pan Zhou, Caiming Xiong, R. Socher, Guosheng Lin
30 Jun 2020