An Explanation of In-context Learning as Implicit Bayesian Inference (arXiv:2111.02080)
Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma
3 November 2021
Papers citing "An Explanation of In-context Learning as Implicit Bayesian Inference" (31 of 531 citing papers shown):
Robustness of Demonstration-based Learning Under Limited Data Scenario
Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, Diyi Yang (19 Oct 2022)

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, ..., Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason W. Wei (17 Oct 2022)

Bootstrapping Multilingual Semantic Parsers using Large Language Models
Abhijeet Awasthi, Nitish Gupta, Bidisha Samanta, Shachi Dave, Sunita Sarawagi, Partha P. Talukdar (13 Oct 2022)

Why self-attention is Natural for Sequence-to-Sequence Problems? A Perspective from Symmetries
Chao Ma, Lexing Ying (13 Oct 2022)

Formal Semantic Geometry over Transformer-based Variational AutoEncoder
Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas (12 Oct 2022)

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo (06 Oct 2022)

ThinkSum: Probabilistic reasoning over sets using large language models
Batu Mehmet Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic (04 Oct 2022)

Complexity-Based Prompting for Multi-Step Reasoning
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot (03 Oct 2022)

What Makes Pre-trained Language Models Better Zero-shot Learners?
Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, Fei Tan (30 Sep 2022)

On the Relation between Sensitivity and Accuracy in In-context Learning
Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He (16 Sep 2022)

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant (01 Aug 2022)

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh (08 Jul 2022)

Emergent Abilities of Large Language Models
Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus (15 Jun 2022)

AANG: Automating Auxiliary Learning
Lucio Dery, Paul Michel, M. Khodak, Graham Neubig, Ameet Talwalkar (27 May 2022)

Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations
Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Taeuk Kim (25 May 2022)

Prototypical Calibration for Few-shot Learning of Language Models
Zhixiong Han, Y. Hao, Li Dong, Yutao Sun, Furu Wei (20 May 2022)

On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model
Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, Sungdong Kim, Hyoungseok Kim, ..., Kyunghyun Cho, Gichang Lee, W. Park, Jung-Woo Ha, Nako Sung (28 Apr 2022)

Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Stephanie C. Y. Chan, Adam Santoro, Andrew Kyle Lampinen, Jane X. Wang, Aaditya K. Singh, Pierre Harvey Richemond, J. Mcclelland, Felix Hill (22 Apr 2022)

Can language models learn from explanations in context?
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill (05 Apr 2022)

Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
Andy Zeng, Maria Attarian, Brian Ichter, K. Choromanski, Adrian S. Wong, ..., Michael S. Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, Peter R. Florence (01 Apr 2022)

STaR: Bootstrapping Reasoning With Reasoning
E. Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman (28 Mar 2022)

GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal (14 Mar 2022)

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, M. Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer (25 Feb 2022)

A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models
Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, Dawei Song (14 Jan 2022)

Learning To Retrieve Prompts for In-Context Learning
Ohad Rubin, Jonathan Herzig, Jonathan Berant (16 Dec 2021)

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush (15 Oct 2021)

Systematic human learning and generalization from a brief tutorial with explanatory feedback
A. Nam, James L. McClelland (10 Jul 2021)

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant (18 Apr 2021)

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen (31 Dec 2020)

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze (21 Jan 2020)

Multiclass Classification Calibration Functions
Bernardo Avila-Pires, Csaba Szepesvári (20 Sep 2016)