Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning

1 March 2024
Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang
arXiv:2403.00865

Papers citing "Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning"

6 / 6 papers shown
The Curse of Unrolling: Rate of Differentiating Through Optimization
Damien Scieur, Quentin Bertrand, Gauthier Gidel, Fabian Pedregosa
27 Sep 2022

Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning
Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang
19 Sep 2022 · FedML

Gradient-based Bi-level Optimization for Deep Learning: A Survey
Can Chen, Xiangshan Chen, Chen-li Ma, Zixuan Liu, Xue Liu
24 Jul 2022

Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Luca Franceschi, P. Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil
13 Jun 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017 · OOD

Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil
06 Mar 2017