BERTese: Learning to Speak to BERT
Adi Haviv, Jonathan Berant, Amir Globerson
9 March 2021. arXiv:2103.05327
Papers citing "BERTese: Learning to Speak to BERT" (30 of 80 papers shown)
Prompting Large Language Models With the Socratic Method [LRM, ELM]
Edward Y. Chang. 17 Feb 2023.
Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models [VLM]
Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, Deva Ramanan. 16 Jan 2023.
Optimizing Prompts for Text-to-Image Generation
Y. Hao, Zewen Chi, Li Dong, Furu Wei. 19 Dec 2022.
SPE: Symmetrical Prompt Enhancement for Fact Probing
Yiyuan Li, Tong Che, Yezhen Wang, Zhengbao Jiang, Caiming Xiong, Snigdha Chaturvedi. 14 Nov 2022.
Communication breakdown: On the low mutual intelligibility between human and neural captioning [VLM]
Roberto Dessì, Eleonora Gualdoni, Francesca Franzon, Gemma Boleda, Marco Baroni. 20 Oct 2022.
Query Rewriting for Effective Misinformation Discovery [KELM]
Ashkan Kazemi, Artem Abzaliev, Naihao Deng, Rui Hou, Scott A. Hale, Verónica Pérez-Rosas, Rada Mihalcea. 14 Oct 2022.
MetaPrompting: Learning to Learn Better Prompts [VLM]
Yutai Hou, Hongyuan Dong, Xinghao Wang, Bohan Li, Wanxiang Che. 23 Sep 2022.
Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models [VLM, LRM]
Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou. 20 Sep 2022.
Prompting as Probing: Using Language Models for Knowledge Base Construction [KELM]
Dimitrios Alivanistos, Selene Báez Santamaría, Michael Cochez, Jan-Christoph Kalo, Emile van Krieken, Thiviyan Thanapalasingam. 23 Aug 2022.
No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence [AAML]
Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, Yun Peng, Hongyu Zhang, Michael R. Lyu. 24 Jul 2022.
Probing via Prompting
Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan. 04 Jul 2022.
Learning a Better Initialization for Soft Prompts via Meta-Learning [VLM]
Yukun Huang, Kun Qian, Zhou Yu. 25 May 2022.
Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding [VLM, VPVLM]
Rishabh Bhardwaj, Amrita Saha, S. Hoi, Soujanya Poria. 23 May 2022.
Few-Shot Natural Language Inference Generation with PDD: Prompt and Dynamic Demonstration
Kaijian Li, Shansan Gong, Kenny Q. Zhu. 21 May 2022.
Probing Simile Knowledge from Pre-trained Language Models
Weijie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, Chang Su. 27 Apr 2022.
Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View [ELM, AAML]
Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, Le Sun. 23 Mar 2022.
Pre-trained Token-replaced Detection Model as Few-shot Learner
Zicheng Li, Shoushan Li, Guodong Zhou. 07 Mar 2022.
Black-box Prompt Learning for Pre-trained Language Models [VLM, AAML]
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang. 21 Jan 2022.
Learning To Retrieve Prompts for In-Context Learning [VPVLM, RALM]
Ohad Rubin, Jonathan Herzig, Jonathan Berant. 16 Dec 2021.
Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation [VLM, MLLM]
Tianyi Liu, Zuxuan Wu, Wenhan Xiong, Jingjing Chen, Yu-Gang Jiang. 10 Dec 2021.
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey [LM&MA, VLM, AI4CE]
Bonan Min, Hayley L. Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth. 01 Nov 2021.
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER
Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren. 16 Oct 2021.
Distilling Relation Embeddings from Pre-trained Language Models
Asahi Ushio, Jose Camacho-Collados, Steven Schockaert. 21 Sep 2021.
Continuous Entailment Patterns for Lexical Inference in Context
Martin Schmitt, Hinrich Schütze. 08 Sep 2021.
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [VLM, SyDa]
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig. 28 Jul 2021.
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Guanghui Qin, J. Eisner. 14 Apr 2021.
Relational World Knowledge Representation in Contextual Language Models: A Review [KELM]
Tara Safavi, Danai Koutra. 12 Apr 2021.
Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen. 12 Apr 2021.
PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains [VLM, OOD]
Eyal Ben-David, Nadav Oved, Roi Reichart. 24 Feb 2021.
Language Models as Knowledge Bases? [KELM, AI4MH]
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. 03 Sep 2019.