Is Pre-training Truly Better Than Meta-Learning?

24 June 2023 · Brando Miranda, P. Yu, Saumya Goyal, Yu-xiong Wang, Oluwasanmi Koyejo · arXiv:2306.13841

Papers citing "Is Pre-training Truly Better Than Meta-Learning?"

8 papers

Analyzing LLMs' Knowledge Boundary Cognition Across Languages Through the Lens of Internal Representations
Chenghao Xiao, Hou Pong Chan, Hao Zhang, Mahani Aljunied, Lidong Bing, Noura Al Moubayed, Yu Rong · 18 Apr 2025

DRESS: Disentangled Representation-based Self-Supervised Meta-Learning for Diverse Tasks
Wei Cui, Tongzi Wu, Jesse C. Cresswell, Yi Sui, Keyvan Golestan · 12 Mar 2025

DELAUNAY: a dataset of abstract art for psychophysical and machine learning research
C. Gontier, Jakob Jordan, Mihai A. Petrovici · 28 Jan 2022

Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis
Qi Chen, Changjian Shui, M. Marchand · 29 Sep 2021

The Role of Global Labels in Few-Shot Classification and How to Infer Them [VLM]
Ruohan Wang, Massimiliano Pontil, C. Ciliberto · 09 Aug 2021

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning
Giulia Denevi, Massimiliano Pontil, C. Ciliberto · 25 Aug 2020

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals · 19 Sep 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [OOD]
Chelsea Finn, Pieter Abbeel, Sergey Levine · 09 Mar 2017