Learning a Better Initialization for Soft Prompts via Meta-Learning
25 May 2022 · arXiv 2205.12471 · VLM
Yukun Huang, Kun Qian, Zhou Yu
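For context on the technique the title names, the sketch below shows MAML-style meta-learning of a soft-prompt initialization: an inner gradient step adapts the prompt to each task's support set, and the query loss of the adapted prompt updates the shared initialization. This is a schematic of the general idea, not the paper's exact algorithm; `backbone`, `loss_fn`, and `sample_task` are hypothetical stand-ins for a frozen pretrained LM, a downstream loss, and a few-shot task sampler.

```python
import torch

PROMPT_LEN, DIM, INNER_LR = 4, 16, 0.1
torch.manual_seed(0)

# Stand-in for a frozen pretrained LM: in prompt tuning the backbone's
# weights stay fixed and only the soft prompt is trained.
backbone = torch.nn.Linear(PROMPT_LEN * DIM, 1)
for p in backbone.parameters():
    p.requires_grad_(False)

# The soft prompt: the only trainable tensor. Meta-learning seeks a good
# *initialization* of this tensor rather than a task-specific value.
meta_prompt = torch.nn.Parameter(0.02 * torch.randn(PROMPT_LEN, DIM))
meta_opt = torch.optim.Adam([meta_prompt], lr=1e-3)

def loss_fn(prompt, target):
    # Toy "forward pass": the frozen backbone reads only the prompt here;
    # a real setup would prepend it to the input token embeddings.
    return (backbone(prompt.reshape(1, -1)) - target).pow(2).mean()

def sample_task():
    # Hypothetical task sampler: each task yields (support, query) targets.
    t = torch.randn(())
    return t, t + 0.1 * torch.randn(())

for step in range(200):
    meta_opt.zero_grad()
    for _ in range(4):  # a meta-batch of tasks
        support_y, query_y = sample_task()
        # Inner loop: one gradient step on the support set, keeping the
        # graph so the meta-gradient can flow through the adaptation.
        g, = torch.autograd.grad(loss_fn(meta_prompt, support_y),
                                 meta_prompt, create_graph=True)
        adapted = meta_prompt - INNER_LR * g
        # Outer loop: the adapted prompt's query loss updates the init.
        loss_fn(adapted, query_y).backward()
    meta_opt.step()
```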

Papers citing "Learning a Better Initialization for Soft Prompts via Meta-Learning"

10 / 10 papers shown

Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
Enming Zhang, Liwen Cao, Yanru Wu, Zijie Zhao, Guan Wang, Yang Li
45 · 0 · 0 · 09 Apr 2025

PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs
K. K. Nakka, Ahmed Frikha, Ricardo Mendes, Xue Jiang, Xuebing Zhou
21 · 1 · 0 · 09 Oct 2024

Black-box Prompt Tuning with Subspace Learning
Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu
VLM · 43 · 9 · 0 · 04 May 2023

Self-supervised Meta-Prompt Learning with Meta-Gradient Regularization for Few-shot Generalization
Kaihang Pan, Juncheng Billy Li, Hongye Song, Jun Lin, Xiaozhong Liu, Siliang Tang
OffRL · 19 · 10 · 0 · 22 Mar 2023

Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?
Chengwei Qin, Q. Li, Ruochen Zhao, Shafiq R. Joty
VLM · LRM · 8 · 14 · 0 · 16 Feb 2023

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM · 236 · 780 · 0 · 14 Oct 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 278 · 3,784 · 0 · 18 Apr 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
241 · 1,898 · 0 · 31 Dec 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 294 · 6,927 · 0 · 20 Apr 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 237 · 11,568 · 0 · 09 Mar 2017