Mixing ADAM and SGD: a Combined Optimization Method

16 November 2020
Nicola Landro, I. Gallo, Riccardo La Grassa
ODL
ArXiv (abs) · PDF · HTML
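
As the title suggests, the paper's central idea is to blend the update directions of SGD and Adam into a single parameter step rather than committing to one optimizer. The Python/NumPy sketch below is a minimal, illustrative version of such a combined update; it is not the authors' reference implementation, and the weighting parameters lam_sgd and lam_adam as well as the default hyperparameters are assumptions chosen for clarity.

    import numpy as np

    def mixed_adam_sgd_step(w, grad, state, lr=1e-3, lam_sgd=0.5, lam_adam=0.5,
                            beta1=0.9, beta2=0.999, eps=1e-8):
        """One update that mixes an SGD direction with an Adam direction.

        `state` holds Adam's running moments m, v and the step counter t.
        lam_sgd and lam_adam weight the two contributions (illustrative values).
        """
        state["t"] += 1
        t = state["t"]

        # Adam's first- and second-moment estimates with bias correction.
        state["m"] = beta1 * state["m"] + (1 - beta1) * grad
        state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
        m_hat = state["m"] / (1 - beta1 ** t)
        v_hat = state["v"] / (1 - beta2 ** t)

        adam_dir = m_hat / (np.sqrt(v_hat) + eps)  # Adam's update direction
        sgd_dir = grad                             # plain SGD direction

        # Weighted combination of the two directions, scaled by the learning rate.
        w_new = w - lr * (lam_sgd * sgd_dir + lam_adam * adam_dir)
        return w_new, state

    # Example: a few steps on a toy quadratic loss f(w) = ||w||^2 / 2.
    w = np.array([1.0, -2.0])
    state = {"m": np.zeros_like(w), "v": np.zeros_like(w), "t": 0}
    for _ in range(100):
        grad = w  # gradient of the toy loss
        w, state = mixed_adam_sgd_step(w, grad, state, lr=0.1)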

Papers citing "Mixing ADAM and SGD: a Combined Optimization Method"

8 / 8 papers shown

Efficient training for large-scale optical neural network using an evolutionary strategy and attention pruning
Zhiwei Yang, Zeyang Fan, Yihang Lai, Qi Chen, Tian Zhang, Jian Dai, Kun Xu
19 May 2025

FUSE: First-Order and Second-Order Unified SynthEsis in Stochastic Optimization
Conference on Algebraic Informatics (AI), 2025
Zhanhong Jiang, Md Zahid Hasan, Aditya Balu, Joshua R. Waite, Genyi Huang, Soumik Sarkar
06 Mar 2025

Weber-Fechner Law in Temporal Difference learning derived from Control as Inference
Frontiers in Robotics and AI (Front. Robot. AI), 2024
Keiichiro Takahashi, Taisuke Kobayashi, Tomoya Yamanokuchi, Takamitsu Matsubara
31 Dec 2024

New Insight in Cervical Cancer Diagnosis Using Convolution Neural Network Architecture
IAES International Journal of Artificial Intelligence (IJ-AI), 2024
Ach. Khozaimi, Wayan Firdaus Mahmudy
23 Oct 2024

Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
International Conference on Learning Representations (ICLR), 2024
Zeman Li, Xinwei Zhang, Peilin Zhong, Yuan Deng, Meisam Razaviyayn, Vahab Mirrokni
09 Oct 2024

Simultaneous Training of First- and Second-Order Optimizers in Population-Based Reinforcement Learning
Felix Pfeiffer, Shahram Eivazi
27 Aug 2024

Domain adaption and physical constrains transfer learning for shale gas production
Zhao-zhong Yang, Liangjie Gou, Chao Min, Duo Yi, Xiaogang Li, Guo-quan Wen
AI4CE
18 Dec 2023

Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt, Frank Schneider, Philipp Hennig
ODL
03 Jul 2020