ResearchTrend.AI

Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization
Raghu Bollapragada, Stefan M. Wild
arXiv:2109.12213, 24 September 2021

Papers citing "Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization"

4 / 4 papers shown
Adaptive Batch Size Schedules for Distributed Training of Language Models with Data and Model Parallelism
Tim Tsz-Kit Lau, Weijian Li, Chenwei Xu, Han Liu, Mladen Kolar
30 Dec 2024
A Historical Trajectory Assisted Optimization Method for Zeroth-Order Federated Learning
Chenlin Wu, Xiaoyu He, Zike Li, Zibin Zheng
FedML
24 Sep 2024
AdAdaGrad: Adaptive Batch Size Schemes for Adaptive Gradient Methods
Tim Tsz-Kit Lau, Han Liu, Mladen Kolar
ODL
17 Feb 2024
PyPop7: A Pure-Python Library for Population-Based Black-Box Optimization
Qiqi Duan, Guochen Zhou, Chang Shao, Zhuowei Wang, Mingyang Feng, Yuwei Huang, Yajing Tan, Yijun Yang, Qi Zhao, Yuhui Shi
12 Dec 2022