An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning

4 June 2021
Blake E. Woodworth, Nathan Srebro
arXiv: 2106.02720

Papers citing "An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning"

5 papers shown

  1. KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning
     Zhendong Mi, Qitao Tan, Xiaodong Yu, Zining Zhu, Geng Yuan, Shaoyi Huang
     24 May 2025

  2. Exploring Local Norms in Exp-concave Statistical Learning
     Nikita Puchkin, Nikita Zhivotovskiy
     21 Feb 2023

  3. Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
     Xiao-Tong Yuan, P. Li
     09 Jan 2023

  4. Private optimization in the interpolation regime: faster rates and hardness results
     Hilal Asi, Karan N. Chadha, Gary Cheng, John C. Duchi
     31 Oct 2022

  5. Faster federated optimization under second-order similarity
     Ahmed Khaled, Chi Jin (FedML)
     06 Sep 2022