arXiv:1905.06316
Cited By
What do you learn from context? Probing for sentence structure in contextualized word representations
15 May 2019
Ian Tenney
Patrick Xia
Berlin Chen
Alex Jinpeng Wang
Adam Poliak
R. Thomas McCoy
Najoung Kim
Benjamin Van Durme
Samuel R. Bowman
Dipanjan Das
Ellie Pavlick
Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations" (showing 50 of 532)
ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for Counterfactual Statement Modeling
Yaojie Lu
Annan Li
Hongyu Lin
Xianpei Han
Le Sun
6
5
0
17 Sep 2020
End-to-End Neural Event Coreference Resolution
Yaojie Lu
Hongyu Lin
Jialong Tang
Xianpei Han
Le Sun
22
34
0
17 Sep 2020
Retrofitting Structure-aware Transformer Language Model for End Tasks
Hao Fei
Yafeng Ren
Donghong Ji
9
45
0
16 Sep 2020
An information theoretic view on selecting linguistic probes
Zining Zhu
Frank Rudzicz
12
19
0
15 Sep 2020
Investigating Gender Bias in BERT
Rishabh Bhardwaj
Navonil Majumder
Soujanya Poria
25
106
0
10 Sep 2020
Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock Humorous Headlines
Shuning Jin
Yue Yin
XianE Tang
Ted Pedersen
17
2
0
06 Sep 2020
Visually Analyzing Contextualized Embeddings
M. Berger
6
13
0
05 Sep 2020
Analysis and Evaluation of Language Models for Word Sense Disambiguation
Daniel Loureiro
Kiamehr Rezaee
Mohammad Taher Pilehvar
Jose Camacho-Collados
8
13
0
26 Aug 2020
Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation
Goran Glavas
Ivan Vulić
27
67
0
15 Aug 2020
On Commonsense Cues in BERT for Solving Commonsense Tasks
Leyang Cui
Sijie Cheng
Yu Wu
Yue Zhang
SSL
CML
LRM
26
14
0
10 Aug 2020
Evaluating German Transformer Language Models with Syntactic Agreement Tests
Karolina Zaczynska
Nils Feldhus
Robert Schwarzenberg
Aleksandra Gabryszak
Sebastian Möller
6
5
0
07 Jul 2020
Knowledge-Aware Language Model Pretraining
Corby Rosset
Chenyan Xiong
M. Phan
Xia Song
Paul N. Bennett
Saurabh Tiwary
KELM
19
79
0
29 Jun 2020
How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation
Steffen Eger
Johannes Daxenberger
Iryna Gurevych
17
11
0
16 Jun 2020
Revisiting Few-sample BERT Fine-tuning
Tianyi Zhang
Felix Wu
Arzoo Katiyar
Kilian Q. Weinberger
Yoav Artzi
30
441
0
10 Jun 2020
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
W. Guo
Aylin Caliskan
6
233
0
06 Jun 2020
A Cross-Task Analysis of Text Span Representations
Shubham Toshniwal
Freda Shi
Bowen Shi
Lingyu Gao
Karen Livescu
Kevin Gimpel
9
35
0
06 Jun 2020
Understanding Self-Attention of Self-Supervised Audio Transformers
Shu-Wen Yang
Andy T. Liu
Hung-yi Lee
6
27
0
05 Jun 2020
CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning
Alessandro Suglia
Ioannis Konstas
Andrea Vanzo
E. Bastianelli
Desmond Elliott
Stella Frank
Oliver Lemon
24
16
0
03 Jun 2020
A Pairwise Probe for Understanding BERT Fine-Tuning on Machine Reading Comprehension
Jie Cai
Zhengzhou Zhu
Ping Nie
Qian Liu
AAML
8
7
0
02 Jun 2020
Emergence of Separable Manifolds in Deep Language Representations
Jonathan Mamou
Hang Le
Miguel Angel del Rio
Cory Stephenson
Hanlin Tang
Yoon Kim
SueYeon Chung
AAML
AI4CE
6
38
0
01 Jun 2020
Probing Emergent Semantics in Predictive Agents via Question Answering
Abhishek Das
Federico Carnevale
Hamza Merzic
Laura Rimell
R. Schneider
...
Alden Hung
Arun Ahuja
S. Clark
Greg Wayne
Felix Hill
22
18
0
01 Jun 2020
Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals
Yanai Elazar
Shauli Ravfogel
Alon Jacovi
Yoav Goldberg
12
25
0
01 Jun 2020
Syntactic Structure Distillation Pretraining For Bidirectional Encoders
A. Kuncoro
Lingpeng Kong
Daniel Fried
Dani Yogatama
Laura Rimell
Chris Dyer
Phil Blunsom
31
33
0
27 May 2020
Finding Experts in Transformer Models
Xavier Suau
Luca Zappella
N. Apostoloff
13
31
0
15 May 2020
Behind the Scene: Revealing the Secrets of Pre-trained Vision-and-Language Models
Jize Cao
Zhe Gan
Yu Cheng
Licheng Yu
Yen-Chun Chen
Jingjing Liu
VLM
14
127
0
15 May 2020
On the Robustness of Language Encoders against Grammatical Errors
Fan Yin
Quanyu Long
Tao Meng
Kai-Wei Chang
31
33
0
12 May 2020
How Context Affects Language Models' Factual Predictions
Fabio Petroni
Patrick Lewis
Aleksandra Piktus
Tim Rocktaschel
Yuxiang Wu
Alexander H. Miller
Sebastian Riedel
KELM
6
228
0
10 May 2020
Finding Universal Grammatical Relations in Multilingual BERT
Ethan A. Chi
John Hewitt
Christopher D. Manning
11
151
0
09 May 2020
Weakly-Supervised Neural Response Selection from an Ensemble of Task-Specialised Dialogue Agents
Asir Saeed
Khai Mai
Pham Quang Nhat Minh
Nguyen Tuan Duc
Danushka Bollegala
6
0
0
06 May 2020
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka
Allyson Ettinger
40
42
0
04 May 2020
Emergence of Syntax Needs Minimal Supervision
Raphaël Bailly
Kata Gábor
18
5
0
03 May 2020
Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?
Abhilasha Ravichander
Yonatan Belinkov
Eduard H. Hovy
18
123
0
02 May 2020
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
Yada Pruksachatkun
Jason Phang
Haokun Liu
Phu Mon Htut
Xiaoyi Zhang
Richard Yuanzhe Pang
Clara Vania
Katharina Kann
Samuel R. Bowman
CLL
LRM
6
194
0
01 May 2020
Probing Contextual Language Models for Common Ground with Visual Representations
Gabriel Ilharco
Rowan Zellers
Ali Farhadi
Hannaneh Hajishirzi
22
14
0
01 May 2020
Selecting Informative Contexts Improves Language Model Finetuning
Richard Antonello
Nicole M. Beckage
Javier S. Turek
Alexander G. Huth
10
10
0
01 May 2020
Does Data Augmentation Improve Generalization in NLP?
Rohan Jha
Charles Lovering
Ellie Pavlick
17
10
0
30 Apr 2020
A Matter of Framing: The Impact of Linguistic Formalism on Probing Results
Ilia Kuznetsov
Iryna Gurevych
4
26
0
30 Apr 2020
How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
Nicola De Cao
M. Schlichtkrull
Wilker Aziz
Ivan Titov
17
89
0
30 Apr 2020
Investigating Transferability in Pretrained Language Models
Alex Tamkin
Trisha Singh
D. Giovanardi
Noah D. Goodman
MILM
30
48
0
30 Apr 2020
Asking without Telling: Exploring Latent Ontologies in Contextual Representations
Julian Michael
Jan A. Botha
Ian Tenney
12
42
0
29 Apr 2020
What Happens To BERT Embeddings During Fine-tuning?
Amil Merchant
Elahe Rahimtoroghi
Ellie Pavlick
Ian Tenney
4
176
0
29 Apr 2020
Do Neural Language Models Show Preferences for Syntactic Formalisms?
Artur Kulmizev
Vinit Ravishankar
Mostafa Abdou
Joakim Nivre
MILM
6
43
0
29 Apr 2020
Quantifying the Contextualization of Word Representations with Semantic Class Probing
Mengjie Zhao
Philipp Dufter
Yadollah Yaghoobzadeh
Hinrich Schütze
12
27
0
25 Apr 2020
Contextualized Representations Using Textual Encyclopedic Knowledge
Mandar Joshi
Kenton Lee
Yi Luan
Kristina Toutanova
15
30
0
24 Apr 2020
Syntactic Data Augmentation Increases Robustness to Inference Heuristics
Junghyun Min
R. Thomas McCoy
Dipanjan Das
Emily Pitler
Tal Linzen
28
175
0
24 Apr 2020
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
Goro Kobayashi
Tatsuki Kuribayashi
Sho Yokoi
Kentaro Inui
14
15
0
21 Apr 2020
Exploring the Combination of Contextual Word Embeddings and Knowledge Graph Embeddings
Lea Dieudonat
Kelvin Han
Phyllicia Leavitt
Esteban Marquer
12
3
0
17 Apr 2020
TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue
Chien-Sheng Wu
S. Hoi
R. Socher
Caiming Xiong
17
319
0
15 Apr 2020
What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models
Wietse de Vries
Andreas van Cranenburgh
Malvina Nissim
MILM
SSeg
MoE
6
64
0
14 Apr 2020
Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus
Hao Fei
Meishan Zhang
Donghong Ji
8
106
0
14 Apr 2020