DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks

5 January 2016
Jie Fu, Hongyin Luo, Jiashi Feng, K. H. Low, Tat-Seng Chua
arXiv:1601.00917
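Since this page only lists citation metadata, a quick reminder of what the cited paper does may help. DrMAD computes hypergradients (gradients of a validation loss with respect to hyperparameters) by backpropagating through the training run itself, and its key trick is to approximate the unstored training trajectory as a linear interpolation between the initial and final weights, so the reverse pass needs only those two checkpoints. Below is a minimal NumPy sketch of that idea on a toy quadratic problem with a single L2-regularization hyperparameter `lam`; the losses, constants, and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Toy setup (illustrative, not from the paper):
#   L_train(w) = 0.5*||w - a||^2 + 0.5*lam*||w||^2
#   L_val(w)   = 0.5*||w - b||^2
rng = np.random.default_rng(0)
d, T, alpha, lam = 5, 100, 0.1, 0.5
a, b = rng.normal(size=d), rng.normal(size=d)

def grad_train(w, lam):
    """dL_train/dw for the quadratic training loss above."""
    return (w - a) + lam * w

# Forward pass: plain gradient descent, keeping only the endpoints w0 and wT.
w0 = rng.normal(size=d)
w = w0.copy()
for _ in range(T):
    w = w - alpha * grad_train(w, lam)
wT = w

# Reverse pass: backpropagate g = dL_val/dw through all T updates.
# DrMAD's distillation step: each unstored iterate w_t is approximated by
# linear interpolation between w0 and wT instead of being stored or replayed.
g = wT - b          # dL_val/dw evaluated at wT
d_lam = 0.0
for t in reversed(range(T)):
    w_t = (1 - t / T) * w0 + (t / T) * wT   # DrMAD's linear approximation
    # Direct dependence of the update on lam: d/dlam[-alpha*grad_train] = -alpha*w_t
    d_lam += g @ (-alpha * w_t)
    # Jacobian of w_{t+1} = w_t - alpha*((w_t - a) + lam*w_t) w.r.t. w_t
    g = (1 - alpha * (1 + lam)) * g

print("approximate hypergradient dL_val/dlam:", d_lam)
```

In the paper the same reverse recursion runs over momentum SGD on a neural network's training loss; the interpolation is what removes the memory cost of storing every intermediate iterate that exact reverse-mode differentiation would require.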

Papers citing "DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks"

13 papers shown

A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization Method
Ankur Sinha, Satender Gunwal, Shivam Kumar
25 Aug 2022

Learning the Effect of Registration Hyperparameters with HyperMorph
Andrew Hoopes, Malte Hoffmann, Douglas N. Greve, Bruce Fischl, John Guttag, Adrian Dalca
30 Mar 2022

Adaptive Gradient Methods with Local Guarantees
Zhou Lu, Wenhan Xia, Sanjeev Arora, Elad Hazan
ODL · 02 Mar 2022

Online Hyperparameter Meta-Learning with Hypergradient Distillation
Haebeom Lee, Hayeon Lee, Jaewoong Shin, Eunho Yang, Timothy M. Hospedales, Sung Ju Hwang
DD · 06 Oct 2021

Efficient Hyperparameter Optimization in Deep Learning Using a Variable Length Genetic Algorithm
Xueli Xiao, Ming Yan, S. Basodi, Chunyan Ji, Yi Pan
23 Jun 2020

Meta-Learning in Neural Networks: A Survey
Timothy M. Hospedales, Antreas Antoniou, P. Micaelli, Amos Storkey
OOD · 11 Apr 2020

Optimizing Millions of Hyperparameters by Implicit Differentiation
Jonathan Lorraine, Paul Vicol, David Duvenaud
DD · 06 Nov 2019

Reducing The Search Space For Hyperparameter Optimization Using Group Sparsity
Minsu Cho, Chinmay Hegde
24 Apr 2019

Least Squares Auto-Tuning
Shane T. Barratt, Stephen P. Boyd
MoMe · 10 Apr 2019

Stochastic Hyperparameter Optimization through Hypernetworks
Jonathan Lorraine, David Duvenaud
26 Feb 2018

Hyperparameter Optimization: A Spectral Approach
Elad Hazan, Adam R. Klivans, Yang Yuan
02 Jun 2017

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
ODL · 30 Nov 2014

Joint Training of Deep Boltzmann Machines
Ian Goodfellow, Aaron Courville, Yoshua Bengio
FedML · 12 Dec 2012