Transferring Knowledge from Large Foundation Models to Small Downstream Models
arXiv:2406.07337, 11 June 2024
Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Yuyang Wang, Andrew Gordon Wilson

Papers citing "Transferring Knowledge from Large Foundation Models to Small Downstream Models" (6 papers shown)
Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors (20 May 2022)
Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, A. Wilson
Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs (20 Oct 2021)
Kaichao You, Yong Liu, Ziyang Zhang, Jianmin Wang, Michael I. Jordan, Mingsheng Long
MLP-Mixer: An all-MLP Architecture for Vision (04 May 2021)
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching (05 Feb 2021)
Mingi Ji, Byeongho Heo, Sungrae Park
A linearized framework and a new benchmark for model selection for fine-tuning (29 Jan 2021)
Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, L. Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman