ResearchTrend.AI

How Important is the Train-Validation Split in Meta-Learning?
arXiv:2010.05843 · 12 October 2020
Yu Bai, Minshuo Chen, Pan Zhou, T. Zhao, Jason D. Lee, Sham Kakade, Haiquan Wang, Caiming Xiong

Papers citing "How Important is the Train-Validation Split in Meta-Learning?" (17 papers shown)

  • Bayes meets Bernstein at the Meta Level: an Analysis of Fast Rates in Meta-Learning with PAC-Bayes
    Charles Riou, Pierre Alquier, Badr-Eddine Chérief-Abdellatif · 23 Feb 2023 · 64 / 10 / 0
  • Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes
    Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik · 17 Jan 2023 · FedML · 37 / 5 / 0
  • TaskMix: Data Augmentation for Meta-Learning of Spoken Intent Understanding
    Surya Kant Sahu · 26 Sep 2022 · 33 / 0 / 0
  • Meta-RegGNN: Predicting Verbal and Full-Scale Intelligence Scores using Graph Neural Networks and Meta-Learning
    Imen Jegham, I. Rekik · 14 Sep 2022 · 34 / 3 / 0
  • Betty: An Automatic Differentiation Library for Multilevel Optimization
    Sang Keun Choe, Willie Neiswanger, P. Xie, Eric P. Xing · 05 Jul 2022 · AI4CE · 41 / 30 / 0
  • Provable Generalization of Overparameterized Meta-learning Trained with SGD
    Yu Huang, Yingbin Liang, Longbo Huang · 18 Jun 2022 · MLT · 37 / 8 / 0
  • Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning
    Haoxiang Wang, Yite Wang, Ruoyu Sun, Yue Liu · 17 Mar 2022 · 40 / 28 / 0
  • AutoBalance: Optimized Loss Functions for Imbalanced Data
    Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, Samet Oymak · 04 Jan 2022 · 24 / 67 / 0
  • ALP: Data Augmentation using Lexicalized PCFGs for Few-Shot Text Classification
    Hazel Kim, Daecheol Woo, Seong Joon Oh, Jeong-Won Cha, Yo-Sub Han · 16 Dec 2021 · 328 / 33 / 0
  • Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis
    Qi Chen, Changjian Shui, M. Marchand · 29 Sep 2021 · 45 / 43 / 0
  • Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
    Haoxiang Wang, Han Zhao, Yue Liu · 16 Jun 2021 · 37 / 88 / 0
  • Generalization Guarantees for Neural Architecture Search with Train-Validation Split
    Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi · 29 Apr 2021 · AI4CE, OOD · 41 / 13 / 0
  • Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution
    Amrith Rajagopal Setlur, Oscar Li, Virginia Smith · 23 Feb 2021 · 38 / 13 / 0
  • On Episodes, Prototypical Networks, and Few-shot Learning
    Steinar Laenen, Luca Bertinetto · 17 Dec 2020 · 20 / 98 / 0
  • PAC-Bayes meta-learning with implicit task-specific posteriors
    Cuong C. Nguyen, Thanh-Toan Do, G. Carneiro · 05 Mar 2020 · BDL · 59 / 7 / 0
  • Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
    Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals · 19 Sep 2019 · 217 / 640 / 0
  • Bilevel Programming for Hyperparameter Optimization and Meta-Learning
    Luca Franceschi, P. Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil · 13 Jun 2018 · 115 / 718 / 0