Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen
arXiv:2104.05240 · 12 April 2021
Papers citing "Factual Probing Is [MASK]: Learning vs. Learning to Recall" (36 of 86 papers shown)
Rows from Many Sources: Enriching row completions from Wikidata with a pre-trained Language Model
Carina Negreanu, Alperen Karaoglu, Jack Williams, Shuang Chen, Daniel Fabian, Andrew D. Gordon, Chin-Yew Lin
RALM, AIMat, LMTD · 2 citations · 14 Apr 2022
Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification
Han Wang, Canwen Xu, Julian McAuley
VLM · 12 citations · 13 Apr 2022
Entities, Dates, and Languages: Zero-Shot on Historical Texts with T0
F. Toni, Christopher Akiki, Javier de la Rosa, Clémentine Fourrier, Enrique Manjavacas, Stefan Schweter, Daniel Alexander van Strien
10 citations · 11 Apr 2022
Unsupervised Prompt Learning for Vision-Language Models
Hao Huang, Jack Chu, Fangyun Wei
VPVLM, MLLM, VLM · 131 citations · 07 Apr 2022
Exploring Visual Prompts for Adapting Large-Scale Models
Hyojin Bahng, Ali Jahanian, S. Sankaranarayanan, Phillip Isola
VLM, VPVLM, LRM · 255 citations · 31 Mar 2022
How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis
Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu
KELM · 53 citations · 31 Mar 2022
Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration
Xiwen Liang, Fengda Zhu, Lingling Li, Hang Xu, Xiaodan Liang
LM&Ro, VLM · 29 citations · 08 Mar 2022
SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models
Liang Wang, Wei-Ye Zhao, Zhuoyu Wei, Jingming Liu
178 citations · 04 Mar 2022
Controlling the Focus of Pretrained Language Generation Models
Jiabao Ji, Yoon Kim, James R. Glass, Tianxing He
5 citations · 02 Mar 2022
Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
88 citations · 16 Feb 2022
Domain Adaptation via Prompt Learning
Chunjiang Ge, Rui Huang, Mixue Xie, Zihang Lai, Shiji Song, Shuang Li, Gao Huang
VPVLM, VLM · 143 citations · 14 Feb 2022
Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov
KELM · 1,186 citations · 10 Feb 2022
What Has Been Enhanced in my Knowledge-Enhanced Language Model?
Yifan Hou, Guoji Fu, Mrinmaya Sachan
KELM · 1 citation · 02 Feb 2022
Context-Tuning: Learning Contextualized Prompts for Natural Language Generation
Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
15 citations · 21 Jan 2022
Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu
VLM · 255 citations · 10 Jan 2022
Domain-Aware Continual Zero-Shot Learning
Kai Yi, Paul Janson, Wenxuan Zhang, Mohamed Elhoseiny
4 citations · 24 Dec 2021
Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases
Shrimai Prabhumoye, Rafal Kocielnik, M. Shoeybi, Anima Anandkumar, Bryan Catanzaro
20 citations · 15 Dec 2021
VT-CLIP: Enhancing Vision-Language Models with Visual-guided Texts
Longtian Qiu, Renrui Zhang, Ziyu Guo, Wei Zhang, Zilu Guo, Ziyao Zeng, Guangnan Zhang
VLM, CLIP · 45 citations · 04 Dec 2021
On Transferability of Prompt Tuning for Natural Language Processing
Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, ..., Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
AAML, VLM · 98 citations · 12 Nov 2021
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth
LM&MA, VLM, AI4CE · 1,029 citations · 01 Nov 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM · 805 citations · 14 Oct 2021
Inferring Offensiveness In Images From Natural Language Supervision
P. Schramowski, Kristian Kersting
2 citations · 08 Oct 2021
Can Language Models be Biomedical Knowledge Bases?
Mujeen Sung, Jinhyuk Lee, Sean S. Yi, Minji Jeon, Sungdong Kim, Jaewoo Kang
AI4MH · 105 citations · 15 Sep 2021
PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
VLM · 401 citations · 09 Sep 2021
Discrete and Soft Prompting for Multilingual Models
Mengjie Zhao, Hinrich Schütze
LRM · 71 citations · 08 Sep 2021
Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
VPVLM, CLIP, VLM · 2,267 citations · 02 Sep 2021
Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min, Michael Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer
VLM · 218 citations · 09 Aug 2021
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
VLM, SyDa · 3,828 citations · 28 Jul 2021
A Closer Look at How Fine-tuning Changes BERT
Yichu Zhou, Vivek Srikumar
63 citations · 27 Jun 2021
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Robert L Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel
VPVLM · 207 citations · 24 Jun 2021
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma
96 citations · 17 Jun 2021
Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi, Danai Koutra
KELM · 51 citations · 12 Apr 2021
GPT Understands, Too
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
VLM · 1,144 citations · 18 Mar 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
1,918 citations · 31 Dec 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM · 1,450 citations · 18 Mar 2020
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH · 2,586 citations · 03 Sep 2019