ResearchTrend.AI

A Robust Adaptive Stochastic Gradient Method for Deep Learning
arXiv:1703.00788, 2 March 2017
Çağlar Gülçehre, Jose M. R. Sotelo, Marcin Moczulski, Yoshua Bengio

Papers citing "A Robust Adaptive Stochastic Gradient Method for Deep Learning"

6 papers

Impact of Learning Rate on Noise Resistant Property of Deep Learning Models
Omobayode Fagbohungbe, Lijun Qian (8 May 2022)

AWEU-Net: An Attention-Aware Weight Excitation U-Net for Lung Nodule Segmentation
Syeda Furruka Banu, Md. Mostafa Kamal Sarker, M. Abdel-Nasser, D. Puig, Hatem A. Rashwan (11 Oct 2021)

TAdam: A Robust Stochastic Gradient Optimizer
Wendyam Eric Lionel Ilboudo, Taisuke Kobayashi, Kenji Sugimoto (29 Feb 2020)

Empirical study towards understanding line search approximations for training neural networks
Younghwan Chae, D. Wilke (15 Sep 2019)

Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates
L. Smith, Nicholay Topin (23 Aug 2017)

Curriculum Dropout
Pietro Morerio, Jacopo Cavazza, Riccardo Volpi, René Vidal, Vittorio Murino (18 Mar 2017)