ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

MetaICL: Learning to Learn In Context
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
29 October 2021 · arXiv:2110.15943
Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi

Papers citing "MetaICL: Learning to Learn In Context"

50 / 388 papers shown
SINC: Self-Supervised In-Context Learning for Vision-Language Tasks
IEEE International Conference on Computer Vision (ICCV), 2023
Yi-Syuan Chen, Yun-Zhu Song, Cheng Yu Yeo, Bei Liu, Jianlong Fu, Hong-Han Shuai
15 Jul 2023

Scaling In-Context Demonstrations with Structured Attention
Tianle Cai, Kaixuan Huang, Jason D. Lee, Mengdi Wang
05 Jul 2023

Meta-training with Demonstration Retrieval for Efficient Few-shot Learning
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Aaron Mueller, Kanika Narang, Lambert Mathias, Qifan Wang, Hamed Firooz
30 Jun 2023

On the Exploitability of Instruction Tuning
Neural Information Processing Systems (NeurIPS), 2023
Manli Shu, Zhenghao Hu, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein
28 Jun 2023

Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression
Neural Information Processing Systems (NeurIPS), 2023
Allan Raventós, Mansheej Paul, F. Chen, Surya Ganguli
26 Jun 2023
Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi
24 Jun 2023

Differentiable Instruction Optimization for Cross-Task Generalization
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Masaru Isonuma, Junichiro Mori, Ichiro Sakata
16 Jun 2023

Schema-learning and rebinding as mechanisms of in-context learning and emergence
Neural Information Processing Systems (NeurIPS), 2023
Siva K. Swaminathan, Antoine Dedieu, Rajkumar Vasudeva Raju, Murray Shanahan, Miguel Lazaro-Gredilla, Dileep George
16 Jun 2023

Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2023
Jiatong Li, Yunqing Liu, Wenqi Fan, Xiao Wei, Hui Liu, Shucheng Zhou, Qing Li
11 Jun 2023

Boosting Language Models Reasoning with Chain-of-Knowledge Prompting
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jiadong Wang, Qiushi Sun, Xiang Li, Ming Gao
10 Jun 2023
Aladdin: Zero-Shot Hallucination of Stylized 3D Assets from Abstract Scene Descriptions
Ian Huang, Vrishab Krishna, Omoruyi E. Atekha, Leonidas Guibas
09 Jun 2023

In-Context Learning through the Bayesian Prism
International Conference on Learning Representations (ICLR), 2023
Madhuri Panwar, Kabir Ahuja, Navin Goyal
08 Jun 2023

MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang
02 Jun 2023

Transformers learn to implement preconditioned gradient descent for in-context learning
Neural Information Processing Systems (NeurIPS), 2023
Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, S. Sra
01 Jun 2023

Improving CLIP Training with Language Rewrites
Neural Information Processing Systems (NeurIPS), 2023
Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, Yonglong Tian
31 May 2023

IDAS: Intent Discovery with Abstractive Summarization
Maarten De Raedt, Fréderic Godin, Thomas Demeester, Chris Develder
31 May 2023
What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Yufeng Zhang, Fengzhuo Zhang, Zhuoran Yang, Zhaoran Wang
30 May 2023

Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning
Yingcong Li, Kartik K. Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak
30 May 2023

Im-Promptu: In-Context Composition from Image Prompts
Neural Information Processing Systems (NeurIPS), 2023
Bhishma Dedhia, Michael Chang, Jake C. Snell, Thomas Griffiths, N. Jha
26 May 2023

Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Ruixiang Tang, Dehan Kong, Lo-li Huang, Hui Xue
26 May 2023

Scaling Data-Constrained Language Models
Neural Information Processing Systems (NeurIPS), 2023
Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, S. Pyysalo, Thomas Wolf, Colin Raffel
25 May 2023
PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu
24 May 2023

BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi
24 May 2023

Getting MoRE out of Mixture of Language Model Reasoning Experts
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, Jordan L. Boyd-Graber
24 May 2023

Meta-Tuning LLMs to Leverage Lexical Knowledge for Generalizable Language Style Understanding
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Ruohao Guo, Wei Xu, Alan Ritter
24 May 2023

QLoRA: Efficient Finetuning of Quantized LLMs
Neural Information Processing Systems (NeurIPS), 2023
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer
23 May 2023
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
23 May 2023

When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jingwei Ni, Zhijing Jin, Qian Wang, Mrinmaya Sachan, Markus Leippold
23 May 2023

Concept-aware Training Improves In-context Learning Ability of Language Models
Michal Štefánik, Marek Kadlcík
23 May 2023

Continual Dialogue State Tracking via Example-Guided Question Answering
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Hyundong Justin Cho, Andrea Madotto, Mohammad Kachuee, Khyathi Chandu, Satwik Kottur, Jing Xu, Jonathan May, Chinnadhurai Sankar
23 May 2023

Enhance Reasoning Ability of Visual-Language Models via Large Language Models
Yueting Yang, Xintong Zhang, Wenjuan Han
22 May 2023
TaskWeb: Selecting Better Source Tasks for Multi-task NLP
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Joongwon Kim, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi
22 May 2023

Improved Compositional Generalization by Generating Demonstrations for Meta-Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Sam Spilsbury, Alexander Ilin
22 May 2023

Meta-in-context learning in large language models
Neural Information Processing Systems (NeurIPS), 2023
Julian Coda-Forno, Marcel Binz, Zeynep Akata, M. Botvinick, Jane X. Wang, Eric Schulz
22 May 2023

RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought
Tianci Xue, Ziqi Wang, Zhenhailong Wang, Chi Han, Pengfei Yu, Heng Ji
19 May 2023

Federated Foundation Models: Privacy-Preserving and Collaborative Learning for Large Models
International Conference on Language Resources and Evaluation (LREC), 2023
Sixing Yu, J. P. Muñoz, Ali Jannesari
19 May 2023
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Kai Zhang, Bernal Jiménez Gutiérrez, Yu-Chuan Su
18 May 2023

Learning In-context Learning for Named Entity Recognition
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jiawei Chen, Yaojie Lu, Hongyu Lin, Jie Lou, Wei Jia, Dai Dai, Hua Wu, Boxi Cao, Xianpei Han, Le Sun
18 May 2023

Pre-Training to Learn in Context
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yuxian Gu, Li Dong, Furu Wei, Shiyu Huang
16 May 2023

Small Models are Valuable Plug-ins for Large Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, Julian McAuley
15 May 2023

Symbol tuning improves in-context learning in language models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Jerry W. Wei, Le Hou, Andrew Kyle Lampinen, Xiangning Chen, Da Huang, ..., Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le
15 May 2023
Synergistic Interplay between Search and Large Language Models for Information Retrieval
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jiazhan Feng, Chongyang Tao, Xiubo Geng, Tao Shen, Can Xu, Guodong Long, Dongyan Zhao, Daxin Jiang
12 May 2023

Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao
10 May 2023

Revisiting Relation Extraction in the era of Large Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Somin Wadhwa, Silvio Amir, Byron C. Wallace
08 May 2023

Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner
Neural Information Processing Systems (NeurIPS), 2023
Zhengxiang Shi, Aldo Lipani
02 May 2023

Working Memory Capacity of ChatGPT: An Empirical Study
AAAI Conference on Artificial Intelligence (AAAI), 2023
Dongyu Gong, Xingchen Wan, Dingmin Wang
30 Apr 2023
Controlled Text Generation with Natural Language Instructions
International Conference on Machine Learning (ICML), 2023
Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Gotlieb Wilcox, Robert Bamler, Mrinmaya Sachan
27 Apr 2023

TABLET: Learning From Instructions For Tabular Data
Dylan Slack, Sameer Singh
25 Apr 2023

From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning
Qian Liu, Fan Zhou, Zhengbao Jiang, Longxu Dou, Min Lin
17 Apr 2023

Chinese Open Instruction Generalist: A Preliminary Release
Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yi Zhou, ..., Zhaoqun Li, Zekun Wang, Chenghua Lin, Wen-Fen Huang, Jie Fu
17 Apr 2023