ResearchTrend.AI
arXiv:2108.04106 · Cited By
Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min, Michael Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer
9 August 2021 [VLM]

Papers citing "Noisy Channel Language Model Prompting for Few-Shot Text Classification" (50 / 50 papers shown)
Rethinking Invariance in In-context Learning
  Lizhe Fang, Yifei Wang, Khashayar Gatmiry, Lei Fang, Y. Wang (08 May 2025)

E-InMeMo: Enhanced Prompting for Visual In-Context Learning
  Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima, Hajime Nagahara (25 Apr 2025) [VLM]

Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization
  Xinhao Yao, Hongjin Qian, Xiaolin Hu, Gengze Xu, Wei Liu, Jian Luan, B. Wang, Y. Liu (03 Oct 2024)

The Solution for Language-Enhanced Image New Category Discovery
  Haonan Xu, Dian Chao, Xiangyu Wu, Zhonghua Wan, Yang Yang (06 Jul 2024) [VLM]

From Introspection to Best Practices: Principled Analysis of Demonstrations in Multimodal In-Context Learning
  Nan Xu, Fei Wang, Sheng Zhang, Hoifung Poon, Muhao Chen (01 Jul 2024)
Token-based Decision Criteria Are Suboptimal in In-context Learning
  Hakaze Cho, Yoshihiro Sakai, Mariko Kato, Kenshiro Tanaka, Akira Ishii, Naoya Inoue (24 Jun 2024)

Implicit In-context Learning
  Zhuowei Li, Zihao Xu, Ligong Han, Yunhe Gao, Song Wen, Di Liu, Hao Wang, Dimitris N. Metaxas (23 May 2024)

MAML-en-LLM: Model Agnostic Meta-Training of LLMs for Improved In-Context Learning
  Sanchit Sinha, Yuguang Yue, Victor Soto, Mayank Kulkarni, Jianhua Lu, Aidong Zhang (19 May 2024) [LRM]

Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models
  Kang He, Yinghan Long, Kaushik Roy (15 Feb 2024)

Explain-then-Translate: An Analysis on Improving Program Translation with Self-generated Explanations
  Zilu Tang, Mayank Agarwal, Alex Shypula, Bailin Wang, Derry Wijaya, Jie Chen, Yoon Kim (13 Nov 2023) [LRM]
In-Context Learning for Few-Shot Molecular Property Prediction
  Christopher Fifty, J. Leskovec, Sebastian Thrun (13 Oct 2023)

Fine-tune Language Models to Approximate Unbiased In-context Learning
  Timothy Chu, Zhao-quan Song, Chiwun Yang (05 Oct 2023)

GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models
  Yonggan Fu, Yongan Zhang, Zhongzhi Yu, Sixu Li, Zhifan Ye, Chaojian Li, Cheng Wan, Ying Lin (19 Sep 2023)

A Multi-Task Semantic Decomposition Framework with Task-specific Pre-training for Few-Shot NER
  Guanting Dong, Zechen Wang, Jinxu Zhao, Gang Zhao, Daichi Guo, ..., Keqing He, Xuefeng Li, Liwen Wang, Xinyue Cui, Weiran Xu (28 Aug 2023)

Differentiable Instruction Optimization for Cross-Task Generalization
  Masaru Isonuma, Junichiro Mori, Ichiro Sakata (16 Jun 2023)

IDAS: Intent Discovery with Abstractive Summarization
  Maarten De Raedt, Fréderic Godin, Thomas Demeester, Chris Develder (31 May 2023)
PIP: Parse-Instructed Prefix for Syntactically Controlled Paraphrase Generation
  Yixin Wan, Kuan-Hao Huang, Kai-Wei Chang (26 May 2023)

Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations
  Wei-Lin Chen, Cheng-Kuang Wu, Yun-Nung Chen, Hsin-Hsi Chen (24 May 2023)

Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning
  Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun (23 May 2023)

How Does In-Context Learning Help Prompt Tuning?
  Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer (22 Feb 2023) [VLM]

In-context Example Selection with Influences
  Nguyen Tai, Eric Wong (21 Feb 2023)

Towards Few-Shot Identification of Morality Frames using In-Context Learning
  Shamik Roy, Nishanth Nakshatri, Dan Goldwasser (03 Feb 2023)
In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models
  Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown (20 Dec 2022)

Coder Reviewer Reranking for Code Generation
  Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, M. Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang (29 Nov 2022)

MACSum: Controllable Summarization with Mixed Attributes
  Yusen Zhang, Yang Liu, Ziyi Yang, Yuwei Fang, Yulong Chen, Dragomir R. Radev, Chenguang Zhu, Michael Zeng, Rui Zhang (09 Nov 2022)

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
  Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek F. Abdelzaher, Jiawei Han (06 Nov 2022) [VLM]

Controllable Factuality in Document-Grounded Dialog Systems Using a Noisy Channel Model
  Nico Daheim, David Thulke, Christian Dugast, Hermann Ney (31 Oct 2022) [HILM]

Robustness of Demonstration-based Learning Under Limited Data Scenario
  Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, Diyi Yang (19 Oct 2022)
Continued Pretraining for Better Zero- and Few-Shot Promptability
  Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy (19 Oct 2022) [VLM]

Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts
  Nghia T. Le, Fan Bai, Alan Ritter (07 Oct 2022)

Automatic Chain of Thought Prompting in Large Language Models
  Zhuosheng Zhang, Aston Zhang, Mu Li, Alexander J. Smola (07 Oct 2022) [ReLM, LRM]

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
  Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo (06 Oct 2022) [FedML, VLM, UQCV, LRM]

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
  Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant (01 Aug 2022)
Emergent Abilities of Large Language Models
  Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus (15 Jun 2022) [ELM, ReLM, LRM]

Offline RL for Natural Language Generation with Implicit Language Q Learning
  Charles Burton Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine (05 Jun 2022) [OffRL]

kNN-Prompt: Nearest Neighbor Zero-Shot Inference
  Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer (27 May 2022) [RALM, VLM]

Gradient-Based Constrained Sampling from Language Models
  Sachin Kumar, Biswajit Paria, Yulia Tsvetkov (25 May 2022) [BDL]

Natural Language to Code Translation with Execution
  Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang (25 Apr 2022)
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
  Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Ves Stoyanov, Majid Yazdani (03 Apr 2022) [VLM]

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
  Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, M. Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer (25 Feb 2022) [LLMAG, LRM]

Semantic-Oriented Unlabeled Priming for Large-Scale Language Models
  Yanchen Liu, Timo Schick, Hinrich Schütze (12 Feb 2022) [VLM]

AdaPrompt: Adaptive Model Training for Prompt-based NLP
  Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, Yue Zhang (10 Feb 2022) [VLM]

Describing Differences between Text Distributions with Natural Language
  Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt (28 Jan 2022) [VLM]
MetaICL: Learning to Learn In Context
  Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi (29 Oct 2021) [LRM]

Coherence boosting: When your pretrained language model is not paying enough attention
  Nikolay Malkin, Zhen Wang, Nebojsa Jojic (15 Oct 2021) [RALM]

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
  Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang (14 Oct 2021) [VLM]

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
  Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp (18 Apr 2021) [AILaw, LRM]

The Power of Scale for Parameter-Efficient Prompt Tuning
  Brian Lester, Rami Al-Rfou, Noah Constant (18 Apr 2021) [VPVLM]

Making Pre-trained Language Models Better Few-shot Learners
  Tianyu Gao, Adam Fisch, Danqi Chen (31 Dec 2020)

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
  Chelsea Finn, Pieter Abbeel, Sergey Levine (09 Mar 2017) [OOD]