Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers

14 April 2024
Diana-Nicoleta Grigore, Mariana-Iuliana Georgescu, J. A. Justo, T. Johansen, Andreea-Iuliana Ionescu, Radu Tudor Ionescu

Papers citing "Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers"

4 / 4 papers shown

LCM-LoRA: A Universal Stable-Diffusion Acceleration Module
Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu, Patrick von Platen, Apolinário Passos, Longbo Huang, Jian Li, Hang Zhao
MoMe · 09 Nov 2023

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 11 Nov 2021

Transformers in Vision: A Survey
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, F. Khan, M. Shah
ViT · 04 Jan 2021

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Guangda Ji, Zhanxing Zhu
20 Oct 2020