Can language models learn from explanations in context?
arXiv: 2204.02329
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
5 April 2022
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill

Papers citing "Can language models learn from explanations in context?"

Showing 50 of 241 citing papers.

Passive learning of active causal strategies in agents and language models. Neural Information Processing Systems (NeurIPS), 2023.
Andrew Kyle Lampinen, Stephanie C. Y. Chan, Ishita Dasgupta, A. Nam, Jane X. Wang
25 May 2023

BookGPT: A General Framework for Book Recommendation Empowered by Large Language Model
Aakas Zhiyuli, YanFang Chen, Xuan Zhang, Xun Liang
25 May 2023

EvEval: A Comprehensive Evaluation of Event Semantics for Large Language Models
Zhengwei Tao, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yanlin Feng, Jia Li, Wenpeng Hu
24 May 2023

In-Context Impersonation Reveals Large Language Models' Strengths and Biases. Neural Information Processing Systems (NeurIPS), 2023.
Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata
24 May 2023

Using Natural Language Explanations to Rescale Human Judgments
Manya Wadhwa, Jifan Chen, Junyi Jessy Li, Greg Durrett
24 May 2023

A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification. International Conference/Workshop on Automation of Software Test (AST), 2023.
Norbert Tihanyi, Ridhi Jain, Yiannis Charalambous, M. Ferrag, Youcheng Sun, Lucas C. Cordeiro
24 May 2023

LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond
Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu
23 May 2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
23 May 2023

Make a Choice! Knowledge Base Question Answering with In-Context Learning
Chuanyuan Tan, Yuehe Chen, Wenbiao Shao, Wenliang Chen
23 May 2023

Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, He He
22 May 2023

CCGen: Explainable Complementary Concept Generation in E-Commerce
Jie Huang, Yifan Gao, Zheng Li, Jingfeng Yang, Yangqiu Song, Chao Zhang, Zining Zhu, Haoming Jiang, Kevin Chen-Chuan Chang, Bing Yin
19 May 2023

Post Hoc Explanations of Language Models Can Improve Language Models. Neural Information Processing Systems (NeurIPS), 2023.
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju
19 May 2023

Text Classification via Large Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Xiaofei Sun, Xiaoya Li, Jiwei Li, Leilei Gan, Shangwei Guo, Tianwei Zhang, Guoyin Wang
15 May 2023

Leveraging Large Language Models in Conversational Recommender Systems
Luke Friedman, Sameer Ahuja, David Allen, Zhenning Tan, Hakim Sidahmed, ..., Ajay Patel, Harsh Lara, Brian Chu, Zexiang Chen, Manoj Kumar Tiwari
13 May 2023

ZARA: Improving Few-Shot Self-Rationalization for Small Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Wei-Lin Chen, An-Zi Yen, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen
12 May 2023

Overinformative Question Answering by Humans and Machines. Annual Meeting of the Cognitive Science Society (CogSci), 2023.
Polina Tsvilodub, Michael Franke, Robert D. Hawkins, Noah D. Goodman
11 May 2023

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren
11 May 2023

MoT: Memory-of-Thought Enables ChatGPT to Self-Improve. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Xiaonan Li, Xipeng Qiu
09 May 2023

Explanation-based Finetuning Makes Models More Robust to Spurious Cues. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Josh Magnus Ludan, Yixuan Meng, Nguyen Tai, Saurabh Shah, Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
08 May 2023

Faithful Question Answering with Monte-Carlo Planning. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Ruixin Hong, Hongming Zhang, Honghui Zhao, Dong Yu, Changshui Zhang
04 May 2023

Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings
Daniel Philip Rose, Vaishnavi Himakunthala, Andy Ouyang, Ryan He, Alex Mei, Yujie Lu, Michael Stephen Saxon, Chinmay Sonar, Diba Mirza, William Yang Wang
03 May 2023

Few-shot In-context Learning for Knowledge Base Question Answering. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Tianle Li, Xueguang Ma, Alex Zhuang, Yu Gu, Yu-Chuan Su, Wenhu Chen
02 May 2023

RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models. Workshop on Biomedical Natural Language Processing (BioNLP), 2023.
Dave Van Veen, Cara Van Uden, Maayane Attias, Anuj Pareek, Christian Blüthgen, ..., Jean-Benoit Delbrouck, Juan Manuel Zambrano Chaves, C. Langlotz, Akshay S. Chaudhari, John M. Pauly
02 May 2023

Inducing anxiety in large language models increases exploration and bias
Julian Coda-Forno, Kristin Witte, Akshay K. Jagadish, Marcel Binz, Zeynep Akata, Eric Schulz
21 Apr 2023

"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User
  Programmers and Code-Generating Large Language Models
"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language ModelsInternational Conference on Human Factors in Computing Systems (CHI), 2023
Michael Xieyang Liu
Advait Sarkar
Carina Negreanu
B. Zorn
Jack Williams
N. Toronto
Andrew D. Gordon
212
129
0
13 Apr 2023
Why think step by step? Reasoning emerges from the locality of experience. Neural Information Processing Systems (NeurIPS), 2023.
Ben Prystawski, Michael Y. Li, Noah D. Goodman
07 Apr 2023

Evaluating Large Language Models on a Highly-specialized Topic, Radiation Oncology Physics. Frontiers in Oncology (Front Oncol), 2023.
J. Holmes, Zheng Liu, Hua Zhou, Yuzhen Ding, Terence T. Sio, ..., Jonathan B. Ashman, Xiang Li, Tianming Liu, Jiajian Shen, Wen Liu
01 Apr 2023

Training Language Models with Language Feedback at Scale
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Dong Wang, Ethan Perez
28 Mar 2023

Improving Code Generation by Training with Natural Language Feedback
Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez
28 Mar 2023

Language Model Behavior: A Comprehensive Survey. International Conference on Computational Logic (ICCL), 2023.
Tyler A. Chang, Benjamin Bergen
20 Mar 2023

A Theory of Emergent In-Context Learning as Implicit Structure Induction
Michael Hahn, Navin Goyal
14 Mar 2023

The Learnability of In-Context Learning. Neural Information Processing Systems (NeurIPS), 2023.
Noam Wies, Yoav Levine, Amnon Shashua
14 Mar 2023

The Life Cycle of Knowledge in Big Language Models: A Survey. Machine Intelligence Research (MIR), 2023.
Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun
14 Mar 2023

Reward Design with Language Models. International Conference on Learning Representations (ICLR), 2023.
Minae Kwon, Sang Michael Xie, Kalesha Bullard, Dorsa Sadigh
27 Feb 2023

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Kashun Shum, Shizhe Diao, Tong Zhang
24 Feb 2023

Few-shot Multimodal Multitask Multilingual Learning
Vasu Sharma, Vinija Jain
19 Feb 2023

Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints. Findings, 2023.
Albert Lu, Hongxin Zhang, Yanzhe Zhang, Xuezhi Wang, Diyi Yang
17 Feb 2023

Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue. Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc), 2023.
Hsuan Su, Shachi H. Kumar, Sahisnu Mazumder, Wenda Chen, R. Manuvinakurike, Eda Okur, Saurav Sahay, L. Nachman, Shang-Tse Chen, Hung-yi Lee
12 Feb 2023

Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Xi Ye, Greg Durrett
09 Feb 2023

Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment. Neural Information Processing Systems (NeurIPS), 2023.
Hao Liu, Wilson Yan, Pieter Abbeel
02 Feb 2023

Using In-Context Learning to Improve Dialogue Safety. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Z. Hakkani-Tür
02 Feb 2023

Multitask Instruction-based Prompting for Fallacy Recognition. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Tariq Alhindi, Tuhin Chakrabarty, Elena Musi, Smaranda Muresan
24 Jan 2023

Are Language Models Worse than Humans at Following Prompts? It's Complicated. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Albert Webson, A. Loo, Qinan Yu, Ellie Pavlick
17 Jan 2023

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko
16 Jan 2023

Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits. Neural Information Processing Systems (NeurIPS), 2023.
Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X. Liu, Soroush Vosoughi
01 Jan 2023

Large Language Models Encode Clinical Knowledge. Nature, 2022.
K. Singhal, Shekoofeh Azizi, T. Tu, S. S. Mahdavi, Jason W. Wei, ..., A. Rajkomar, Joelle Barral, Christopher Semturs, Alan Karthikesalingam, Vivek Natarajan
26 Dec 2022

Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning
Christopher T. Lengerich, Gabriel Synnaeve, Amy Zhang, Hugh Leather, Kurt Shuster, François Charton, Charysse Redwood
21 Dec 2022

Parsel: Algorithmic Reasoning with Language Models by Composing Decompositions. Neural Information Processing Systems (NeurIPS), 2022.
E. Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, Nick Haber
20 Dec 2022

20 Dec 2022
Towards Reasoning in Large Language Models: A Survey
Towards Reasoning in Large Language Models: A SurveyAnnual Meeting of the Association for Computational Linguistics (ACL), 2022
Jie Huang
Kevin Chen-Chuan Chang
LM&MAELMLRM
780
787
0
20 Dec 2022
KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales
Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren
19 Dec 2022