TEMPERA: Test-Time Prompting via Reinforcement Learning

21 November 2022 (arXiv:2211.11890)
Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, Joseph E. Gonzalez
Tags: VLM

Papers citing "TEMPERA: Test-Time Prompting via Reinforcement Learning"

14 / 14 papers shown
TAPO: Task-Referenced Adaptation for Prompt Optimization
Wenxin Luo, W. Wang, Xiaopeng Li, Weibo Zhou, Pengyue Jia, Xiangyu Zhao
12 Jan 2025

Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models
Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Chen Liu, Y. Lan, Chao Shen
15 Jun 2024

A Bayesian approach for prompt optimization in pre-trained language models
Antonio Sabbatella, Andrea Ponti, Antonio Candelieri, I. Giordani, F. Archetti
01 Dec 2023

AutoHint: Automatic Prompt Optimization with Hint Generation
Hong Sun, Xue Li, Yi Xu, Youkow Homma, Qinhao Cao, Min-man Wu, Jian Jiao, Denis Xavier Charles
13 Jul 2023

Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency
Lingfeng Shen, Weiting Tan, Boyuan Zheng, Daniel Khashabi
Tags: VLM
18 May 2023

In-context Example Selection with Influences
Nguyen Tai, Eric Wong
21 Feb 2023

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, ..., Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, Alexander M. Rush
Tags: VLM
02 Feb 2022

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM
14 Oct 2021

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
Tags: AILaw, LRM
18 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
Tags: AAML, RALM
17 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM
20 Apr 2018