arXiv:2311.09569
Cited By
Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation
16 November 2023
Yao Lu
Jiayi Wang
Raphael Tang
Sebastian Riedel
Pontus Stenetorp
Papers citing "Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation" (7 of 7 papers shown):
"Mixtures of In-Context Learners" (Giwon Hong, Emile van Krieken, E. Ponti, Nikolay Malkin, Pasquale Minervini), 05 Nov 2024

"Large Language Models are Zero-Shot Reasoners" (Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa), 24 May 2022

"Training language models to follow instructions with human feedback" (Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe), 04 Mar 2022

"Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity" (Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp), 18 Apr 2021

"The Power of Scale for Parameter-Efficient Prompt Tuning" (Brian Lester, Rami Al-Rfou, Noah Constant), 18 Apr 2021

"Making Pre-trained Language Models Better Few-shot Learners" (Tianyu Gao, Adam Fisch, Danqi Chen), 31 Dec 2020

"Language Models as Knowledge Bases?" (Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel), 03 Sep 2019