Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation

6 October 2022
Xu Guo, Boyang Albert Li, Han Yu
VLM

Papers citing "Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation"

11 / 11 papers shown
Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models
Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Chen Liu, Y. Lan, Chao Shen
15 Jun 2024

How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?
Rheeya Uppaal, Yixuan Li, Junjie Hu
31 Jan 2024

Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition
Yige Xu, Zhiwei Zeng, Zhiqi Shen
VLM
23 Oct 2023

Domain Confused Contrastive Learning for Unsupervised Domain Adaptation
Quanyu Long, Tianze Luo, Wenya Wang, Sinno Jialin Pan
10 Jul 2022

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
VLM, LRM
15 Oct 2021

Effectiveness of Pre-training for Few-shot Intent Classification
Haode Zhang, Yuwei Zhang, Li-Ming Zhan, Jiaxin Chen, Guangyuan Shi, Xiao-Ming Wu, Albert Y. S. Lam
VLM
13 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML
01 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020

FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu
AAML
25 Sep 2019