arXiv:1905.06316
What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
International Conference on Learning Representations (ICLR), 2019. Posted 15 May 2019.
Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations" (50 of 555 shown):
- Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models? Sagnik Ray Choudhury, Nikita Bhutani, Isabelle Augenstein. (15 Sep 2021)
- The Grammar-Learning Trajectories of Neural Language Models. Leshem Choshen, Guy Hacohen, D. Weinshall, Omri Abend. (13 Sep 2021)
- Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar. (13 Sep 2021)
- COMBO: State-of-the-Art Morphosyntactic Analysis. Mateusz Klimaszewski, Alina Wróblewska. (11 Sep 2021)
- Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding. Shane Storks, Qiaozi Gao, Yichi Zhang, J. Chai. EMNLP 2021. (10 Sep 2021)
- Beyond the Tip of the Iceberg: Assessing Coherence of Text Classifiers. Shane Storks, J. Chai. EMNLP 2021. (10 Sep 2021)
- How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy. S. Rajaee, Mohammad Taher Pilehvar. EMNLP 2021. (10 Sep 2021)
- A Bayesian Framework for Information-Theoretic Probing. Tiago Pimentel, Robert Bamler. EMNLP 2021. (08 Sep 2021)
- How much pretraining data do language models need to learn syntax? Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner. EMNLP 2021. (07 Sep 2021)
- An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction. Samuel Mensah, Kai Sun, Nikolaos Aletras. (02 Sep 2021)
- Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu. EMNLP 2021. (31 Aug 2021)
- How Does Adversarial Fine-Tuning Benefit BERT? J. Ebrahimi, Hao Yang, Wei Zhang. (31 Aug 2021)
- Rethinking Why Intermediate-Task Fine-Tuning Works. Ting-Yun Chang, Chi-Jen Lu. EMNLP 2021. (26 Aug 2021)
- What do pre-trained code models know about code? Anjan Karmakar, Romain Robbes. ASE 2021. (25 Aug 2021)
- Post-hoc Interpretability for Neural NLP: A Survey. Andreas Madsen, Siva Reddy, A. Chandar. ACM Computing Surveys, 2021. (10 Aug 2021)
- Grounding Representation Similarity with Statistical Testing. Frances Ding, Jean-Stanislas Denain, Jacob Steinhardt. (03 Aug 2021)
- Local Structure Matters Most: Perturbation Study in NLU. Louis Clouâtre, Prasanna Parthasarathi, Payel Das, Sarath Chandar. Findings, 2021. (29 Jul 2021)
- Language Models as Zero-shot Visual Semantic Learners. Yue Jiao, Jonathon S. Hare, Adam Prügel-Bennett. (26 Jul 2021)
- Theoretical foundations and limits of word embeddings: what types of meaning can they capture? Alina Arseniev-Koehler. Sociological Methods & Research, 2021. (22 Jul 2021)
- Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task. Ishan Tarunesh, Somak Aditya, Monojit Choudhury. (15 Jul 2021)
- What do writing features tell us about AI papers? Zining Zhu, Bai Li, Yang Xu, Frank Rudzicz. (13 Jul 2021)
- A Flexible Multi-Task Model for BERT Serving. Tianwen Wei, Jianwei Qi, Shenghuang He. (12 Jul 2021)
- A Survey on Data Augmentation for Text Classification. Markus Bayer, M. Kaufhold, Christian A. Reuter. (07 Jul 2021)
- A Closer Look at How Fine-tuning Changes BERT. Yichu Zhou, Vivek Srikumar. ACL 2021. (27 Jun 2021)
- Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases. Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, M. Liao, Tong Xue, Jin Xu. (17 Jun 2021)
- Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning. Colin Wei, Sang Michael Xie, Tengyu Ma. (17 Jun 2021)
- Coreference-Aware Dialogue Summarization. Zhengyuan Liu, Ke Shi, Nancy F. Chen. (16 Jun 2021)
- BERT Embeddings for Automatic Readability Assessment. Joseph Marvin Imperial. RANLP 2021. (15 Jun 2021)
- Pre-Trained Models: Past, Present and Future. Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu. AI Open, 2021. (14 Jun 2021)
- Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models. Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov. ACL 2021. (10 Jun 2021)
- BERTnesia: Investigating the capture and forgetting of knowledge in BERT. Jonas Wallat, Jaspreet Singh, Avishek Anand. BlackboxNLP 2020. (05 Jun 2021)
- Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization. Yichen Jiang, Asli Celikyilmaz, P. Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Joey Tianyi Zhou, Jianfeng Gao. NAACL 2021. (02 Jun 2021)
- John praised Mary because he? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs. Yova Kementchedjhieva, Mark Anderson, Anders Søgaard. Findings, 2021. (02 Jun 2021)
- Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT. Anmol Nayak, Hariprasad Timmapathini. ICON 2021. (01 Jun 2021)
- Corpus-Based Paraphrase Detection Experiments and Review. T. Vrbanec, A. Meštrović. (31 May 2021)
- On the Interplay Between Fine-tuning and Composition in Transformers. Lang-Chi Yu, Allyson Ettinger. Findings, 2021. (31 May 2021)
- Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge. Yi Zhang, Lei Li, Yunfang Wu, Qi Su, Xu Sun. IEEE/ACM Transactions on Audio, Speech and Language Processing, 2021. (28 May 2021)
- Inspecting the concept knowledge graph encoded by modern language models. Carlos Aspillaga, Marcelo Mendoza, Alvaro Soto. Findings, 2021. (27 May 2021)
- LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond. Daniel Loureiro, A. Jorge, Jose Camacho-Collados. Artificial Intelligence, 2021. (26 May 2021)
- The Low-Dimensional Linear Geometry of Contextualized Word Representations. Evan Hernandez, Jacob Andreas. CoNLL 2021. (15 May 2021)
- How Reliable are Model Diagnostics? V. Aribandi, Yi Tay, Donald Metzler. Findings, 2021. (12 May 2021)
- Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models. Laura Pérez-Mayos, Alba Táboas García, Simon Mille, Leo Wanner. Findings, 2021. (10 May 2021)
- Bird's Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach. Buse Giledereli, Mrinmaya Sachan. ACL 2021. (06 May 2021)
- Morph Call: Probing Morphosyntactic Content of Multilingual Transformers. Vladislav Mikhailov, O. Serikov, Ekaterina Artemova. (26 Apr 2021)
- Improving BERT Pretraining with Syntactic Supervision. Georgios Tziafas, Konstantinos Kogkalidis, G. Wijnholds, M. Moortgat. (21 Apr 2021)
- Enhancing Cognitive Models of Emotions with Representation Learning. Yuting Guo, Jinho Choi. CMCL 2021. (20 Apr 2021)
- Probing Across Time: What Does RoBERTa Know and When? Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith. EMNLP 2021. (16 Apr 2021)
- Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models. Karolina Stańczak, Sagnik Ray Choudhury, Tiago Pimentel, Robert Bamler, Isabelle Augenstein. PLoS ONE, 2021. (15 Apr 2021)
- Effect of Post-processing on Contextualized Word Representations. Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani. COLING 2021. (15 Apr 2021)
- An Interpretability Illusion for BERT. Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg. (14 Apr 2021)