ResearchTrend.AI

Black-box Prompt Learning for Pre-trained Language Models
arXiv:2201.08531 · 21 January 2022
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang
Topics: VLM, AAML

Papers citing "Black-box Prompt Learning for Pre-trained Language Models"

12 papers shown
FLOPS: Forward Learning with OPtimal Sampling
Tao Ren, Zishi Zhang, Jinyang Jiang, Guanghao Li, Zeliang Zhang, Mingqian Feng, Yijie Peng
08 Oct 2024
CPT: Consistent Proxy Tuning for Black-box Optimization
Yuanyang He, Zitong Huang, Xinxing Xu, Rick Siow Mong Goh, Salman Khan, W. Zuo, Yong Liu, Chun-Mei Feng
01 Jul 2024
LLM2FEA: Discover Novel Designs with Generative Evolutionary Multitasking
Melvin Wong, Jiao Liu, Thiago Rios, Stefan Menzel, Yew-Soon Ong
21 Jun 2024
A Bayesian approach for prompt optimization in pre-trained language models
Antonio Sabbatella, Andrea Ponti, Antonio Candelieri, I. Giordani, F. Archetti
01 Dec 2023
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
Topics: VLM, LRM
15 Oct 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Topics: VLM
14 Oct 2021
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
Topics: AILaw, LRM
18 Apr 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Topics: VPVLM
18 Apr 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Topics: AAML
01 Jan 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM
20 Apr 2018