Can language models learn from explanations in context?

5 April 2022
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill
LRM, ReLM
arXiv:2204.02329

Papers citing "Can language models learn from explanations in context?"

50 / 216 papers shown

The Learnability of In-Context Learning
Noam Wies, Yoav Levine, Amnon Shashua
14 Mar 2023 · 117 · 91 · 0

The Life Cycle of Knowledge in Big Language Models: A Survey
Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun
KELM
14 Mar 2023 · 26 · 27 · 0

Reward Design with Language Models
Minae Kwon, Sang Michael Xie, Kalesha Bullard, Dorsa Sadigh
LM&Ro
27 Feb 2023 · 25 · 198 · 0

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data
Kashun Shum, Shizhe Diao, Tong Zhang
ReLM, LRM
24 Feb 2023 · 26 · 127 · 0

Few-shot Multimodal Multitask Multilingual Learning
Aman Chadha, Vinija Jain
19 Feb 2023 · 34 · 0 · 0

Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints
Albert Lu, Hongxin Zhang, Yanzhe Zhang, Xuezhi Wang, Diyi Yang
LRM
17 Feb 2023 · 11 · 28 · 0

Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
Hsuan Su, Shachi H. Kumar, Sahisnu Mazumder, Wenda Chen, R. Manuvinakurike, Eda Okur, Saurav Sahay, L. Nachman, Shang-Tse Chen, Hung-yi Lee
12 Feb 2023 · 23 · 3 · 0

Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting
Xi Ye, Greg Durrett
LRM, ReLM
09 Feb 2023 · 18 · 12 · 0

Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment
Hao Liu, Wilson Yan, Pieter Abbeel
02 Feb 2023 · 26 · 24 · 0

Using In-Context Learning to Improve Dialogue Safety
Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Z. Hakkani-Tür
02 Feb 2023 · 25 · 37 · 0

Multitask Instruction-based Prompting for Fallacy Recognition
Tariq Alhindi, Tuhin Chakrabarty, Elena Musi, Smaranda Muresan
LRM
24 Jan 2023 · 10 · 30 · 0

Are Language Models Worse than Humans at Following Prompts? It's Complicated
Albert Webson, A. Loo, Qinan Yu, Ellie Pavlick
LRM
17 Jan 2023 · 11 · 16 · 0

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko
ELM, ReLM
16 Jan 2023 · 23 · 208 · 0

Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits
Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X. Liu, Soroush Vosoughi
01 Jan 2023 · 78 · 34 · 0

Large Language Models Encode Clinical Knowledge
K. Singhal, Shekoofeh Azizi, T. Tu, S. S. Mahdavi, Jason W. Wei, ..., A. Rajkomar, Joelle Barral, Christopher Semturs, Alan Karthikesalingam, Vivek Natarajan
LM&MA, ELM, AI4MH
26 Dec 2022 · 19 · 2,154 · 0

Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning
Christopher T. Lengerich, Gabriel Synnaeve, Amy Zhang, Hugh Leather, Kurt Shuster, François Charton, Charysse Redwood
SSL, OffRL
21 Dec 2022 · 14 · 1 · 0

Parsel: Algorithmic Reasoning with Language Models by Composing Decompositions
E. Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, Nick Haber
ReLM, LRM
20 Dec 2022 · 19 · 53 · 0

Towards Reasoning in Large Language Models: A Survey
Jie Huang, Kevin Chen-Chuan Chang
LM&MA, ELM, LRM
20 Dec 2022 · 19 · 579 · 0

KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales
Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren
ReLM, LRM
19 Dec 2022 · 15 · 1 · 0

Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI
Alex Mei, Sharon Levy, William Yang Wang
19 Dec 2022 · 39 · 7 · 0

Reasoning with Language Model Prompting: A Survey
Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen
ReLM, ELM, LRM
19 Dec 2022 · 49 · 307 · 0

Language model acceptability judgements are not always robust to context
Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, R. Levy, Adina Williams
18 Dec 2022 · 11 · 17 · 0

Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales
Saurabh Kulshreshtha, Anna Rumshisky
ReLM, LRM
15 Nov 2022 · 17 · 2 · 0

Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations
Swarnadeep Saha, Peter Hase, Nazneen Rajani, Mohit Bansal
LRM
14 Nov 2022 · 18 · 14 · 0

Robosourcing Educational Resources -- Leveraging Large Language Models for Learnersourcing
Paul Denny, Sami Sarsa, Arto Hellas, Juho Leinonen
AI4Ed
09 Nov 2022 · 6 · 26 · 0

PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
LRM, ReLM
03 Nov 2022 · 13 · 59 · 0

Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
Andrew Kyle Lampinen
ReLM, ELM
27 Oct 2022 · 25 · 36 · 0

Does Self-Rationalization Improve Robustness to Spurious Correlations?
Alexis Ross, Matthew E. Peters, Ana Marasović
LRM
24 Oct 2022 · 19 · 9 · 0

Large Language Models Can Self-Improve
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
ReLM, AI4MH, LRM
20 Oct 2022 · 13 · 559 · 0

Continued Pretraining for Better Zero- and Few-Shot Promptability
Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy
VLM
19 Oct 2022 · 21 · 12 · 0

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, ..., Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason W. Wei
ALM, ELM, LRM, ReLM
17 Oct 2022 · 69 · 988 · 0

Explanations from Large Language Models Make Small Reasoners Better
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Zoey Chen, Xinlu Zhang, ..., Jingu Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan
ReLM, LRM
13 Oct 2022 · 33 · 129 · 0

Mind's Eye: Grounded Language Model Reasoning through Simulation
Ruibo Liu, Jason W. Wei, S. Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai
ReLM, LRM
11 Oct 2022 · 111 · 79 · 0

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
VLM
06 Oct 2022 · 29 · 2 · 0

Ask Me Anything: A simple strategy for prompting language models
Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré
ReLM, LRM
05 Oct 2022 · 206 · 205 · 0

COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models
Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
05 Oct 2022 · 25 · 10 · 0

Recitation-Augmented Language Models
Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou
RALM
04 Oct 2022 · 192 · 60 · 0

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
XAI
22 Sep 2022 · 104 · 107 · 0

Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Xingdi Yuan, Tong Wang, Yen-Hsiang Wang, Emery Fine, Rania Abdelghani, Pauline Lucas, Hélène Sauzéon, Pierre-Yves Oudeyer
22 Sep 2022 · 25 · 28 · 0

WeLM: A Well-Read Pre-trained Language Model for Chinese
Hui Su, Xiao Zhou, Houjin Yu, Xiaoyu Shen, Yuwen Chen, Zilin Zhu, Yang Yu, Jie Zhou
21 Sep 2022 · 24 · 23 · 0

Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, A. Kalyan
ELM, ReLM, LRM
20 Sep 2022 · 209 · 1,101 · 0

Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models
Ben Prystawski, P. Thibodeau, Christopher Potts, Noah D. Goodman
ReLM, LRM, AI4CE
16 Sep 2022 · 32 · 20 · 0

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant
01 Aug 2022 · 19 · 447 · 0

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
ReLM, LRM
14 Jul 2022 · 20 · 177 · 0

Inner Monologue: Embodied Reasoning through Planning with Language Models
Wenlong Huang, F. Xia, Ted Xiao, Harris Chan, Jacky Liang, ..., Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter
LLMAG, LM&Ro, LRM
12 Jul 2022 · 39 · 850 · 0

Rationale-Augmented Ensembles in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Denny Zhou
ReLM, LRM
02 Jul 2022 · 10 · 124 · 0

Using cognitive psychology to understand GPT-3
Marcel Binz, Eric Schulz
ELM, LLMAG
21 Jun 2022 · 242 · 434 · 0

Emergent Abilities of Large Language Models
Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus
ELM, ReLM, LRM
15 Jun 2022 · 43 · 2,328 · 0

Making Large Language Models Better Reasoners with Step-Aware Verifier
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, B. Chen, Jian-Guang Lou, Weizhu Chen
ReLM, LRM
06 Jun 2022 · 28 · 205 · 0

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
ReLM, LRM
24 May 2022 · 218 · 189 · 0