ResearchTrend.AI
Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent
arXiv: 2409.08770
17 February 2025
Hikaru Umeda, Hideaki Iiduka

Papers citing "Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent"

2 papers shown

Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum
Keisuke Kamo, Hideaki Iiduka
15 Jan 2025
Relationship between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent using Armijo Line Search
Yuki Tsukada, Hideaki Iiduka
25 Jul 2023