SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

15 October 2021
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
Topics: VLM, LRM
Links: arXiv · PDF · HTML
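
For context on the method named in the title: soft prompt transfer tunes a small set of continuous prompt embeddings on a source task while the backbone model stays frozen, then reuses those embeddings to initialize the prompt for a target task. Below is a minimal PyTorch-style sketch of that idea; the class name, prompt length, and embedding size are illustrative assumptions, not the paper's actual code.

```python
from typing import Optional

import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to a frozen model's inputs.

    Illustrative sketch only: prompt_len, d_model, and the transfer step
    are assumptions for exposition, not SPoT's released implementation.
    """

    def __init__(self, prompt_len: int = 20, d_model: int = 768,
                 source_prompt: Optional[torch.Tensor] = None):
        super().__init__()
        if source_prompt is not None:
            # Transfer: start the target task's prompt from a prompt
            # already tuned on a source task (the core SPoT idea).
            init = source_prompt.clone()
        else:
            init = torch.randn(prompt_len, d_model) * 0.02
        self.embeddings = nn.Parameter(init)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the soft prompt to every sequence in the batch; only
        # self.embeddings receives gradients, the backbone stays frozen.
        batch_size = input_embeds.size(0)
        prompt = self.embeddings.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


# Usage sketch: tune a prompt on a source task, then reuse it as the
# starting point for a target task's prompt.
source_prompt = SoftPrompt()  # assume this was trained on the source task
target_prompt = SoftPrompt(source_prompt=source_prompt.embeddings.detach())
```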

Papers citing "SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer"

10 / 10 papers shown

1. Efficient Knowledge Transfer in Multi-Task Learning through Task-Adaptive Low-Rank Representation
   Xiao Zhang, Kangsheng Wang, Tianyu Hu, Huimin Ma
   20 Apr 2025 · 19 · 0 · 0

2. Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
   Enming Zhang, Liwen Cao, Yanru Wu, Zijie Zhao, Guan Wang, Yang Li
   09 Apr 2025 · 31 · 0 · 0

3. STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
   Tu Vu, Minh-Thang Luong, Quoc V. Le, Grady Simon, Mohit Iyyer
   13 Sep 2021 · 104 · 53 · 0

4. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   Topics: VPVLM
   18 Apr 2021 · 254 · 2,999 · 0

5. The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
   Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, ..., Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
   Topics: VLM
   02 Feb 2021 · 224 · 254 · 0

6. WARP: Word-level Adversarial ReProgramming
   Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
   Topics: AAML
   01 Jan 2021 · 232 · 302 · 0

7. Making Pre-trained Language Models Better Few-shot Learners
   Tianyu Gao, Adam Fisch, Danqi Chen
   31 Dec 2020 · 223 · 1,649 · 0

8. The Lottery Ticket Hypothesis for Pre-trained BERT Networks
   Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
   23 Jul 2020 · 115 · 345 · 0

9. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
   Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
   Topics: ELM
   20 Apr 2018 · 267 · 6,003 · 0

10. Teaching Machines to Read and Comprehend
    Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, Phil Blunsom
    10 Jun 2015 · 155 · 3,357 · 0