AdapterDistillation: Non-Destructive Task Composition with Knowledge Distillation

26 December 2023
arXiv: 2312.16261
Junjie Wang, Yicheng Chen, Wangshu Zhang, Sen Hu, Teng Xu, Jing Zheng
Tags: VLM

Papers citing "AdapterDistillation: Non-Destructive Task Composition with Knowledge Distillation"

2 of 2 citing papers shown.

No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement
Mateusz Klimaszewski, Piotr Andruszkiewicz, Alexandra Birch
Tags: MoMe
24 Apr 2024

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021