ResearchTrend.AI
arXiv: 2004.05859
Regularizing Meta-Learning via Gradient Dropout

13 April 2020
Hung-Yu Tseng
Yi-Wen Chen
Yi-Hsuan Tsai
Sifei Liu
Yen-Yu Lin
Ming-Hsuan Yang
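As its title suggests, the paper regularizes meta-learning by randomly dropping entries of the gradient during inner-loop adaptation. A minimal NumPy sketch of that general idea on a toy quadratic loss (the function name, drop rate, and loss are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_dropout(grad, drop_rate=0.1, rng=rng):
    """Zero each gradient entry independently with probability drop_rate."""
    mask = rng.random(grad.shape) >= drop_rate
    return grad * mask

# Toy inner-loop adaptation on L(w) = ||w - target||^2 / 2
w = np.zeros(4)
target = np.ones(4)
for _ in range(5):
    grad = w - target                          # dL/dw
    grad = gradient_dropout(grad, drop_rate=0.2)
    w -= 0.5 * grad                            # inner-loop SGD step
```

The masking injects noise into the adapted parameters, which is the regularization effect the paper studies in the meta-learning setting.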

Papers citing "Regularizing Meta-Learning via Gradient Dropout"

7 / 7 papers shown

Title | Authors | Tags | Metrics | Date
A Channel Coding Benchmark for Meta-Learning | Rui Li, Ondrej Bohdal, Rajesh K. Mishra, Hyeji Kim, Da Li, Nicholas D. Lane, Timothy M. Hospedales | | 33 / 9 / 0 | 15 Jul 2021
Regularizing Generative Adversarial Networks under Limited Data | Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang | GAN | 35 / 142 / 0 | 07 Apr 2021
Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout | Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, Dragomir Anguelov | | 41 / 207 / 0 | 14 Oct 2020
Meta-Learning Requires Meta-Augmentation | Janarthanan Rajendran, A. Irpan, Eric Jang | | 24 / 93 / 0 | 10 Jul 2020
Bayesian Model-Agnostic Meta-Learning | Taesup Kim, Jaesik Yoon, Ousmane Amadou Dia, Sungwoong Kim, Yoshua Bengio, Sungjin Ahn | UQCV, BDL | 228 / 498 / 0 | 11 Jun 2018
Probabilistic Model-Agnostic Meta-Learning | Chelsea Finn, Kelvin Xu, Sergey Levine | BDL | 176 / 666 / 0 | 07 Jun 2018
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | Chelsea Finn, Pieter Abbeel, Sergey Levine | OOD | 457 / 11,715 / 0 | 09 Mar 2017