Stochastic Gradient Descent: Going As Fast As Possible But Not Faster
Alice Schoenauer Sebag, Marc Schoenauer, Michèle Sebag
arXiv:1709.01427, 5 September 2017

Papers citing "Stochastic Gradient Descent: Going As Fast As Possible But Not Faster" (4 papers)

Cumulative Learning Rate Adaptation: Revisiting Path-Based Schedules for SGD and Adam
Asma Atamna, Tom Maus, Fabian Kievelitz, Tobias Glasmachers
07 Aug 2025

Flexible numerical optimization with ensmallen
Ryan R. Curtin, Marcus Edel, Rahul Prabhu, S. Basak, Zhihao Lou, Conrad Sanderson
09 Mar 2020

Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
Sharan Vaswani, Aaron Mishkin, I. Laradji, Mark Schmidt, Gauthier Gidel, Damien Scieur
Neural Information Processing Systems (NeurIPS), 2019
24 May 2019

MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Estefanía Talavera, Syeda Furruka Banu, Petia Radeva, D. Puig
29 Aug 2018