ResearchTrend.AI
Optimizing generalization on the train set: a novel gradient-based framework to train parameters and hyperparameters simultaneously

11 June 2020
Karim Lounici, Katia Méziani, Benjamin Riu
arXiv: 2006.06705

Papers citing "Optimizing generalization on the train set: a novel gradient-based framework to train parameters and hyperparameters simultaneously"

2 of 2 papers shown.

1. "A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay" — L. Smith, 26 Mar 2018
2. "Automating biomedical data science through tree-based pipeline optimization" — Randal S. Olson, Ryan J. Urbanowicz, Peter C. Andrews, Nicole A. Lavender, L. C. Kidd, J. Moore, 28 Jan 2016