On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models

17 February 2024 · arXiv: 2402.11305
Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus
Topics: VLM, MQ

Papers citing "On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models"

6 papers shown

Phikon-v2, A large and public feature extractor for biomarker prediction
Alexandre Filiot, Paul Jacob, Alice Mac Kain, Charlie Saillard
MedIm · 17 citations · 13 Sep 2024

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 7,412 citations · 11 Nov 2021

Palette: Image-to-Image Diffusion Models
Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi
DiffM, VLM · 1,584 citations · 10 Nov 2021

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
VPVLM, CLIP, VLM · 2,249 citations · 02 Sep 2021

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
416 citations · 19 Apr 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
SSL · 190 citations · 12 Jan 2021