Cyclical Learning Rates for Training Neural Networks
L. Smith · 3 June 2015 · arXiv:1506.01186 · ODL
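
For context, the paper's core proposal is to let the learning rate cycle linearly between a lower bound (base_lr) and an upper bound (max_lr) with a fixed half-cycle length (step_size), rather than decay monotonically. Below is a minimal Python sketch of that triangular policy, following the formula given in the paper; the function name and the bound and step-size values are illustrative placeholders, not values taken from this page.

import math

def triangular_clr(iteration, base_lr=1e-3, max_lr=6e-3, step_size=2000):
    # Triangular cyclical learning rate (Smith, 2015): rise linearly from
    # base_lr to max_lr over step_size iterations, then fall back, and repeat.
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# Sample one full cycle (2 * step_size = 4000 iterations):
for it in (0, 1000, 2000, 3000, 4000):
    print(it, round(triangular_clr(it), 5))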

Papers citing "Cyclical Learning Rates for Training Neural Networks"

41 / 41 papers shown
Instance-Adaptive Keypoint Learning with Local-to-Global Geometric Aggregation for Category-Level Object Pose Estimation
Wei Wei, Lu Zou, Tao Lu, Yuan Yao, Zhangjin Huang, Guoping Wang · 3DPC · 21 Apr 2025
Carefully Blending Adversarial Training, Purification, and Aggregation Improves Adversarial Robustness
Emanuele Ballarin, A. Ansuini, Luca Bortolussi · AAML · 20 Feb 2025
Learn2Mix: Training Neural Networks Using Adaptive Data Integration
Shyam Venkatasubramanian, Vahid Tarokh · 17 Feb 2025
Generative Adversarial Networks for High-Dimensional Item Factor Analysis: A Deep Adversarial Learning Algorithm
Nanyu Luo, Feng Ji · DRL · 15 Feb 2025
SimBEV: A Synthetic Multi-Task Multi-Sensor Driving Data Generation Tool and Dataset
Goodarz Mehr, A. Eskandarian · 04 Feb 2025
EVT: Efficient View Transformation for Multi-Modal 3D Object Detection
Yongjin Lee, Hyeon-Mun Jeong, Yurim Jeon, Sanghyun Kim · 16 Nov 2024
ResiDual Transformer Alignment with Spectral Decomposition
Lorenzo Basile, Valentino Maiorca, Luca Bortolussi, Emanuele Rodolà, Francesco Locatello · 31 Oct 2024
The Epochal Sawtooth Effect: Unveiling Training Loss Oscillations in Adam and Other Optimizers
Qi Liu, Wanjing Ma · 14 Oct 2024
CRoP: Context-wise Robust Static Human-Sensing Personalization
Sawinder Kaur, Avery Gump, Yi Xiao, Jingyu Xin, Harshit Sharma, Nina R Benway, Jonathan L Preston, Asif Salekin · 26 Sep 2024
AutoFlow: An Autoencoder-based Approach for IP Flow Record Compression with Minimal Impact on Traffic Classification
Adrian Pekar · 17 Sep 2024
Can Learned Optimization Make Reinforcement Learning Less Difficult?
Alexander David Goldie, Chris Xiaoxuan Lu, Matthew Jackson, Shimon Whiteson, Jakob N. Foerster · 09 Jul 2024
A Full Adagrad algorithm with O(Nd) operations
Antoine Godichon-Baggioni, Wei Lu, Bruno Portier · ODL · 03 May 2024
Segmentation Guided Sparse Transformer for Under-Display Camera Image Restoration
Jingyun Xue, Tao Wang, Jun Wang, Kaihao Zhang · ViT · 09 Mar 2024
On the Byzantine-Resilience of Distillation-Based Federated Learning
Christophe Roux, Max Zimmer, Sebastian Pokutta · AAML · 19 Feb 2024
S4Sleep: Elucidating the design space of deep-learning-based sleep stage classification models
Tiezhi Wang, Nils Strodthoff · 10 Oct 2023
Reflective-Net: Learning from Explanations
Johannes Schneider, Michalis Vlachos · FAtt, OffRL, LRM · 27 Nov 2020
MeshWalker: Deep Mesh Understanding by Random Walks
Alon Lahav, A. Tal · 3DV · 09 Jun 2020
An overview of gradient descent optimization algorithms
Sebastian Ruder · ODL · 15 Sep 2016
Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · PINN, 3DV · 25 Aug 2016
SGDR: Stochastic Gradient Descent with Warm Restarts
I. Loshchilov, Frank Hutter · ODL · 13 Aug 2016
Deep Networks with Stochastic Depth
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger · 30 Mar 2016
Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 16 Mar 2016
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 10 Dec 2015
The Effects of Hyperparameters on SGD Training of Neural Networks
Thomas Breuel · 12 Aug 2015
An Empirical Evaluation of Deep Learning on Highway Driving
Brody Huval, Tao Wang, S. Tandon, Jeff Kiske, W. Song, ..., Toki Migimatsu, Royce Cheng-Yue, Fernando A. Mujica, Adam Coates, A. Ng · 07 Apr 2015
Equilibrated adaptive learning rates for non-convex optimization
Yann N. Dauphin, H. D. Vries, Yoshua Bengio · ODL · 15 Feb 2015
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy · OOD · 11 Feb 2015
ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient
Çağlar Gülçehre, Marcin Moczulski, Yoshua Bengio · ODL · 23 Dec 2014
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba · ODL · 22 Dec 2014
Hot Swapping for Online Adaptation of Optimization Hyperparameters
Kevin Bache, D. DeCoste, Padhraic Smyth · OnRL · 20 Dec 2014
Show and Tell: A Neural Image Caption Generator
Oriol Vinyals, Alexander Toshev, Samy Bengio, D. Erhan · 3DV · 17 Nov 2014
Going Deeper with Convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, Scott E. Reed, Dragomir Anguelov, D. Erhan, Vincent Vanhoucke, Andrew Rabinovich · 17 Sep 2014
Sequence to Sequence Learning with Neural Networks
Ilya Sutskever, Oriol Vinyals, Quoc V. Le · AIMat · 10 Sep 2014
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman · FAtt, MDE · 04 Sep 2014
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei · VLM, ObjD · 01 Sep 2014
Caffe: Convolutional Architecture for Fast Feature Embedding
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B. Girshick, S. Guadarrama, Trevor Darrell · VLM, BDL, 3DV · 20 Jun 2014
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio · ODL · 10 Jun 2014
Rich feature hierarchies for accurate object detection and semantic segmentation
Ross B. Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik · ObjD · 11 Nov 2013
ADADELTA: An Adaptive Learning Rate Method
Matthew D. Zeiler · ODL · 22 Dec 2012
Practical recommendations for gradient-based training of deep architectures
Yoshua Bengio · 3DH, ODL · 24 Jun 2012
No More Pesky Learning Rates
Tom Schaul, Sixin Zhang, Yann LeCun · 06 Jun 2012