ResearchTrend.AI

arXiv:2204.02329
Can language models learn from explanations in context?


Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
5 April 2022
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Mathewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill

Papers citing "Can language models learn from explanations in context?"

Showing 41 of 241 citing papers.
Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Alex Mei, Sharon Levy, William Yang Wang
19 Dec 2022

Reasoning with Language Model Prompting: A Survey
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen
19 Dec 2022

Language model acceptability judgements are not always robust to context
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, R. Levy, Adina Williams
18 Dec 2022

Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales
Saurabh Kulshreshtha, Anna Rumshisky
15 Nov 2022

Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Swarnadeep Saha, Peter Hase, Nazneen Rajani, Joey Tianyi Zhou
14 Nov 2022

Robosourcing Educational Resources -- Leveraging Large Language Models for Learnersourcing
Paul Denny, Sami Sarsa, Arto Hellas, Juho Leinonen
09 Nov 2022

PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
International Conference on Learning Representations (ICLR), 2022
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
03 Nov 2022

Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
Computational Linguistics (CL), 2022
Andrew Kyle Lampinen
27 Oct 2022

Does Self-Rationalization Improve Robustness to Spurious Correlations?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Alexis Ross, Matthew E. Peters, Ana Marasović
24 Oct 2022

Large Language Models Can Self-Improve
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
20 Oct 2022

Continued Pretraining for Better Zero- and Few-Shot Promptability
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy
19 Oct 2022

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, ..., Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason W. Wei
17 Oct 2022

Explanations from Large Language Models Make Small Reasoners Better
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Zoey Chen, Xinlu Zhang, ..., Jingu Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan
13 Oct 2022

Mind's Eye: Grounded Language Model Reasoning through Simulation
International Conference on Learning Representations (ICLR), 2022
Ruibo Liu, Jason W. Wei, S. Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai
11 Oct 2022

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
06 Oct 2022

Ask Me Anything: A simple strategy for prompting language models
International Conference on Learning Representations (ICLR), 2022
Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré
05 Oct 2022

COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
05 Oct 2022

Recitation-Augmented Language Models
International Conference on Learning Representations (ICLR), 2022
Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou
04 Oct 2022

Towards Faithful Model Explanation in NLP: A Survey
Computational Linguistics (CL), 2022
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
22 Sep 2022

Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Xingdi Yuan, Tong Wang, Yen-Hsiang Wang, Emery Fine, Rania Abdelghani, Pauline Lucas, Hélene Sauzéon, Pierre-Yves Oudeyer
22 Sep 2022

WeLM: A Well-Read Pre-trained Language Model for Chinese
Hui Su, Xiao Zhou, Houjin Yu, Xiaoyu Shen, Yuwen Chen, Zilin Zhu, Yang Yu, Jie Zhou
21 Sep 2022

Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Neural Information Processing Systems (NeurIPS), 2022
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan
20 Sep 2022

Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models
Annual Meeting of the Cognitive Science Society (CogSci), 2022
Ben Prystawski, P. Thibodeau, Christopher Potts, Noah D. Goodman
16 Sep 2022

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Neural Information Processing Systems (NeurIPS), 2022
Shivam Garg, Dimitris Tsipras, Abigail Z. Jacobs, Gregory Valiant
01 Aug 2022

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
14 Jul 2022

Inner Monologue: Embodied Reasoning through Planning with Language Models
Conference on Robot Learning (CoRL), 2022
Wenlong Huang, F. Xia, Ted Xiao, Harris Chan, Jacky Liang, ..., Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter
12 Jul 2022

Rationale-Augmented Ensembles in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Denny Zhou
02 Jul 2022

Using cognitive psychology to understand GPT-3
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2022
Marcel Binz, Eric Schulz
21 Jun 2022

Emergent Abilities of Large Language Models
Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Abigail Z. Jacobs, J. Dean, W. Fedus
15 Jun 2022

Making Large Language Models Better Reasoners with Step-Aware Verifier
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, B. Chen, Jian-Guang Lou, Weizhu Chen
06 Jun 2022

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
24 May 2022

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Neural Information Processing Systems (NeurIPS), 2022
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Joey Tianyi Zhou, Colin Raffel
11 May 2022

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning
Neural Information Processing Systems (NeurIPS), 2022
Xi Ye, Greg Durrett
06 May 2022

Training Language Models with Language Feedback
Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Dong Wang, Ethan Perez
29 Apr 2022

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, ..., Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, Daniel Khashabi
16 Apr 2022

STaR: Bootstrapping Reasoning With Reasoning
E. Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
28 Mar 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Neural Information Processing Systems (NeurIPS), 2022
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
28 Jan 2022

Reframing Human-AI Collaboration for Generating Free-Text Explanations
Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark O. Riedl, Yejin Choi
16 Dec 2021

Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick
02 Sep 2021

Systematic human learning and generalization from a brief tutorial with explanatory feedback
Open Mind (OM), 2021
A. Nam, James L. McClelland
10 Jul 2021

When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase, Joey Tianyi Zhou
03 Feb 2021