Pre-trained Language Models Can be Fully Zero-Shot Learners
Xuandong Zhao, Siqi Ouyang, Zhiguo Yu, Ming Wu, Lei Li
arXiv:2212.06950 · 14 December 2022 · Tags: VLM, LRM
Cited By

Papers citing "Pre-trained Language Models Can be Fully Zero-Shot Learners" (9 of 9 shown):

| Title | Authors | Tags | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training on an Assistant Task for a Target Task | Xindi Tong, Yujin Zhu, Shijian Fan, Liang Xu | — | 38 | 1 | 0 | 26 Sep 2024 |
| TELEClass: Taxonomy Enrichment and LLM-Enhanced Hierarchical Text Classification with Minimal Supervision | Yunyi Zhang, Ruozhen Yang, Xueqiang Xu, Rui Li, Jinfeng Xiao, Jiaming Shen, Jiawei Han | — | 30 | 9 | 0 | 29 Feb 2024 |
| Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines | Hamed Damirchi, Cristian Rodriguez-Opazo, Ehsan Abbasnejad, Damien Teney, Javen Qinfeng Shi, Stephen Gould, Anton van den Hengel | VLM | 9 | 0 | 0 | 29 Nov 2023 |
| Beyond Prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations | Yu Fei, Ping Nie, Zhao Meng, Roger Wattenhofer, Mrinmaya Sachan | VLM | 27 | 20 | 0 | 29 Oct 2022 |
| Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc Le, Denny Zhou | LM&Ro, LRM, AI4CE, ReLM | 315 | 8,261 | 0 | 28 Jan 2022 |
| Making Pre-trained Language Models Better Few-shot Learners | Tianyu Gao, Adam Fisch, Danqi Chen | — | 238 | 1,898 | 0 | 31 Dec 2020 |
| Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference | Timo Schick, Hinrich Schütze | — | 248 | 1,382 | 0 | 21 Jan 2020 |
| Language Models as Knowledge Bases? | Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel | KELM, AI4MH | 393 | 2,216 | 0 | 03 Sep 2019 |
| GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | ELM | 294 | 6,927 | 0 | 20 Apr 2018 |