arXiv:2204.02329
Can language models learn from explanations in context?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
5 April 2022
Andrew Kyle Lampinen
Ishita Dasgupta
Stephanie C. Y. Chan
Kory Mathewson
Michael Henry Tessler
Antonia Creswell
James L. McClelland
Jane X. Wang
Felix Hill
LRM
ReLM
Papers citing "Can language models learn from explanations in context?" (41 of 241 papers shown)
Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Alex Mei
Sharon Levy
William Yang Wang
208
7
0
19 Dec 2022
Reasoning with Language Model Prompting: A Survey
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Shuofei Qiao
Yixin Ou
Ningyu Zhang
Xiang Chen
Yunzhi Yao
Shumin Deng
Chuanqi Tan
Fei Huang
Huajun Chen
ReLM
ELM
LRM
551
379
0
19 Dec 2022
Language model acceptability judgements are not always robust to context
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Koustuv Sinha
Jon Gauthier
Aaron Mueller
Kanishka Misra
Keren Fuentes
R. Levy
Adina Williams
185
19
0
18 Dec 2022
Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales
Saurabh Kulshreshtha
Anna Rumshisky
ReLM
LRM
121
4
0
15 Nov 2022
Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Swarnadeep Saha
Peter Hase
Nazneen Rajani
Mohit Bansal
LRM
151
16
0
14 Nov 2022
Robosourcing Educational Resources -- Leveraging Large Language Models for Learnersourcing
Paul Denny
Sami Sarsa
Arto Hellas
Juho Leinonen
AI4Ed
101
40
0
09 Nov 2022
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
International Conference on Learning Representations (ICLR), 2023
Peifeng Wang
Aaron Chan
Filip Ilievski
Muhao Chen
Xiang Ren
LRM
ReLM
276
68
0
03 Nov 2022
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
Computational Linguistics (CL), 2022
Andrew Kyle Lampinen
ReLM
ELM
306
44
0
27 Oct 2022
Does Self-Rationalization Improve Robustness to Spurious Correlations?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Alexis Ross
Matthew E. Peters
Ana Marasović
LRM
218
15
0
24 Oct 2022
Large Language Models Can Self-Improve
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Jiaxin Huang
S. Gu
Le Hou
Yuexin Wu
Xuezhi Wang
Hongkun Yu
Jiawei Han
ReLM
AI4MH
LRM
519
736
0
20 Oct 2022
Continued Pretraining for Better Zero- and Few-Shot Promptability
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhaofeng Wu
Robert L. Logan IV
Pete Walsh
Akshita Bhagia
Dirk Groeneveld
Sameer Singh
Iz Beltagy
VLM
162
15
0
19 Oct 2022
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Mirac Suzgun
Nathan Scales
Nathanael Schärli
Sebastian Gehrmann
Yi Tay
...
Aakanksha Chowdhery
Quoc V. Le
Ed H. Chi
Denny Zhou
Jason W. Wei
ALM
ELM
LRM
ReLM
458
1,491
0
17 Oct 2022
Explanations from Large Language Models Make Small Reasoners Better
Shiyang Li
Jianshu Chen
Yelong Shen
Zhiyu Zoey Chen
Xinlu Zhang
...
Jing Qian
Baolin Peng
Yi Mao
Wenhu Chen
Xifeng Yan
ReLM
LRM
223
154
0
13 Oct 2022
Mind's Eye: Grounded Language Model Reasoning through Simulation
International Conference on Learning Representations (ICLR), 2023
Ruibo Liu
Jason W. Wei
S. Gu
Te-Yen Wu
Soroush Vosoughi
Claire Cui
Denny Zhou
Andrew M. Dai
ReLM
LRM
308
91
0
11 Oct 2022
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Seonghyeon Ye
Joel Jang
Doyoung Kim
Yongrae Jo
Minjoon Seo
VLM
212
3
0
06 Oct 2022
Ask Me Anything: A simple strategy for prompting language models
International Conference on Learning Representations (ICLR), 2023
Simran Arora
A. Narayan
Mayee F. Chen
Laurel J. Orr
Neel Guha
Kush S. Bhatia
Ines Chami
Frederic Sala
Christopher Ré
ReLM
LRM
559
252
0
05 Oct 2022
COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Kanishka Misra
Julia Taylor Rayz
Allyson Ettinger
329
16
0
05 Oct 2022
Recitation-Augmented Language Models
International Conference on Learning Representations (ICLR), 2023
Zhiqing Sun
Xuezhi Wang
Yi Tay
Yiming Yang
Denny Zhou
RALM
523
76
0
04 Oct 2022
Towards Faithful Model Explanation in NLP: A Survey
Computational Linguistics (CL), 2022
Qing Lyu
Marianna Apidianaki
Chris Callison-Burch
XAI
402
160
0
22 Sep 2022
Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Xingdi Yuan
Tong Wang
Yen-Hsiang Wang
Emery Fine
Rania Abdelghani
Pauline Lucas
Hélène Sauzéon
Pierre-Yves Oudeyer
228
34
0
22 Sep 2022
WeLM: A Well-Read Pre-trained Language Model for Chinese
Hui Su
Xiao Zhou
Houjin Yu
Xiaoyu Shen
Yuwen Chen
Zilin Zhu
Yang Yu
Jie Zhou
156
23
0
21 Sep 2022
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Neural Information Processing Systems (NeurIPS), 2022
Pan Lu
Swaroop Mishra
Tony Xia
Liang Qiu
Kai-Wei Chang
Song-Chun Zhu
Oyvind Tafjord
Peter Clark
Ashwin Kalyan
ELM
ReLM
LRM
476
1,789
0
20 Sep 2022
Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models
Annual Meeting of the Cognitive Science Society (CogSci), 2023
Ben Prystawski
P. Thibodeau
Christopher Potts
Noah D. Goodman
ReLM
LRM
AI4CE
188
22
0
16 Sep 2022
What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Neural Information Processing Systems (NeurIPS), 2022
Shivam Garg
Dimitris Tsipras
Percy Liang
Gregory Valiant
533
653
0
01 Aug 2022
Language models show human-like content effects on reasoning tasks
Ishita Dasgupta
Andrew Kyle Lampinen
Stephanie C. Y. Chan
Hannah R. Sheahan
Antonia Creswell
D. Kumaran
James L. McClelland
Felix Hill
ReLM
LRM
407
210
0
14 Jul 2022
Inner Monologue: Embodied Reasoning through Planning with Language Models
Conference on Robot Learning (CoRL), 2022
Wenlong Huang
F. Xia
Ted Xiao
Harris Chan
Jacky Liang
...
Tomas Jackson
Linda Luu
Sergey Levine
Karol Hausman
Brian Ichter
LLMAG
LM&Ro
LRM
340
1,132
0
12 Jul 2022
Rationale-Augmented Ensembles in Language Models
Xuezhi Wang
Jason W. Wei
Dale Schuurmans
Quoc Le
Ed H. Chi
Denny Zhou
ReLM
LRM
209
135
0
02 Jul 2022
Using cognitive psychology to understand GPT-3
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2023
Marcel Binz
Eric Schulz
ELM
LLMAG
547
599
0
21 Jun 2022
Emergent Abilities of Large Language Models
Jason W. Wei
Yi Tay
Rishi Bommasani
Colin Raffel
Barret Zoph
...
Tatsunori Hashimoto
Oriol Vinyals
Percy Liang
J. Dean
W. Fedus
ELM
ReLM
LRM
440
3,046
0
15 Jun 2022
Making Large Language Models Better Reasoners with Step-Aware Verifier
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yifei Li
Zeqi Lin
Shizhuo Zhang
Qiang Fu
B. Chen
Jian-Guang Lou
Weizhu Chen
ReLM
LRM
249
281
0
06 Jun 2022
Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Jaehun Jung
Lianhui Qin
Sean Welleck
Faeze Brahman
Chandra Bhagavatula
Ronan Le Bras
Yejin Choi
ReLM
LRM
420
217
0
24 May 2022
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Neural Information Processing Systems (NeurIPS), 2022
Haokun Liu
Derek Tam
Mohammed Muqeeth
Jay Mohta
Tenghao Huang
Mohit Bansal
Colin Raffel
373
1,126
0
11 May 2022
The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning
Neural Information Processing Systems (NeurIPS), 2022
Xi Ye
Greg Durrett
ReLM
LRM
266
220
0
06 May 2022
Training Language Models with Language Feedback
Jérémy Scheurer
Jon Ander Campos
Jun Shern Chan
Angelica Chen
Dong Wang
Ethan Perez
ALM
431
54
0
29 Apr 2022
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Yizhong Wang
Swaroop Mishra
Pegah Alipoormolabashi
Yeganeh Kordi
Amirreza Mirzaei
...
Chitta Baral
Yejin Choi
Noah A. Smith
Hannaneh Hajishirzi
Daniel Khashabi
ELM
527
995
0
16 Apr 2022
STaR: Bootstrapping Reasoning With Reasoning
E. Zelikman
Yuhuai Wu
Jesse Mu
Noah D. Goodman
ReLM
LRM
447
677
0
28 Mar 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Neural Information Processing Systems (NeurIPS), 2022
Jason W. Wei
Xuezhi Wang
Dale Schuurmans
Maarten Bosma
Brian Ichter
F. Xia
Ed H. Chi
Quoc Le
Denny Zhou
LM&Ro
LRM
AI4CE
ReLM
2.1K
13,906
0
28 Jan 2022
Reframing Human-AI Collaboration for Generating Free-Text Explanations
Sarah Wiegreffe
Jack Hessel
Swabha Swayamdipta
Mark O. Riedl
Yejin Choi
203
169
0
16 Dec 2021
Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson
Ellie Pavlick
LRM
300
418
0
02 Sep 2021
Systematic human learning and generalization from a brief tutorial with explanatory feedback
Open Mind (OM), 2021
A. Nam
James L. McClelland
72
3
0
10 Jul 2021
When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase
Mohit Bansal
XAI
349
91
0
03 Feb 2021