arXiv:1905.06316
What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
15 May 2019
Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations" (50 of 532 shown):
- CLiMP: A Benchmark for Chinese Language Model Evaluation (Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, Katharina Kann; 26 Jan 2021)
- The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT (Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra; 22 Jan 2021)
- Of Non-Linearity and Commutativity in BERT (Sumu Zhao, Damian Pascual, Gino Brunner, Roger Wattenhofer; 12 Jan 2021)
- Learning Better Sentence Representation with Syntax Information (Chen Yang; 09 Jan 2021)
- FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation (Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srini Iyer; 31 Dec 2020)
- Inserting Information Bottlenecks for Attribution in Transformers (Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy J. Lin; 27 Dec 2020)
- Pre-Training a Language Model Without Human Language (Cheng-Han Chiang, Hung-yi Lee; 22 Dec 2020)
- Learning from Mistakes: Using Mis-predictions as Harm Alerts in Language Pre-Training (Chen Xing, Wenhao Liu, Caiming Xiong; 16 Dec 2020)
- Infusing Finetuning with Semantic Dependencies (Zhaofeng Wu, Hao Peng, Noah A. Smith; 10 Dec 2020)
- Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Embeddings and the Implications to Representation Learning (Wei Zhang, Murray Campbell, Yang Yu, Sadhana Kumaravel; 03 Dec 2020)
- Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis (Michael A. Lepori, R. Thomas McCoy; 24 Nov 2020)
- FLERT: Document-Level Features for Named Entity Recognition (Stefan Schweter, A. Akbik; 13 Nov 2020)
- When Do You Need Billions of Words of Pretraining Data? (Yian Zhang, Alex Warstadt, Haau-Sing Li, Samuel R. Bowman; 10 Nov 2020)
- Language Through a Prism: A Spectral Approach for Multiscale Language Representations (Alex Tamkin, Dan Jurafsky, Noah D. Goodman; 09 Nov 2020)
- Positional Artefacts Propagate Through Masked Language Model Embeddings (Ziyang Luo, Artur Kulmizev, Xiaoxi Mao; 09 Nov 2020)
- CxGBERT: BERT meets Construction Grammar (Harish Tayyar Madabushi, Laurence Romain, Dagmar Divjak, P. Milin; 09 Nov 2020)
- A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English (Marius Mosbach, Stefania Degaetano-Ortlieb, Marie-Pauline Krielke, Badr M. Abdullah, Dietrich Klakow; 02 Nov 2020)
- Influence Patterns for Explaining Information Flow in BERT (Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta; 02 Nov 2020)
- Vec2Sent: Probing Sentence Embeddings with Natural Language Generation (M. Kerscher, Steffen Eger; 01 Nov 2020)
- Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases (Ryan Steed, Aylin Caliskan; 28 Oct 2020)
- Deep Clustering of Text Representations for Supervision-free Probing of Syntax (Vikram Gupta, Haoyue Shi, Kevin Gimpel, Mrinmaya Sachan; 24 Oct 2020)
- Applying Occam's Razor to Transformer-Based Dependency Parsing: What Works, What Doesn't, and What is Really Necessary (Stefan Grünewald, Annemarie Friedrich, Jonas Kuhn; 23 Oct 2020)
- Dynamic Contextualized Word Embeddings (Valentin Hofmann, J. Pierrehumbert, Hinrich Schütze; 23 Oct 2020)
- Language Models are Open Knowledge Graphs (Chenguang Wang, Xiao Liu, D. Song; 22 Oct 2020)
- Cold-start Active Learning through Self-supervised Language Modeling (Michelle Yuan, Hsuan-Tien Lin, Jordan L. Boyd-Graber; 19 Oct 2020)
- Does Chinese BERT Encode Word Structure? (Yile Wang, Leyang Cui, Yue Zhang; 15 Oct 2020)
- Text Classification Using Label Names Only: A Language Model Self-Training Approach (Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, Jiawei Han; 14 Oct 2020)
- Neural Databases (James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, A. Halevy; 14 Oct 2020)
- A Self-supervised Representation Learning of Sentence Structure for Authorship Attribution (Fereshteh Jafariakinabad, K. Hua; 14 Oct 2020)
- Measuring and Reducing Gendered Correlations in Pre-trained Models (Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, Slav Petrov; 12 Oct 2020)
- Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually) (Alex Warstadt, Yian Zhang, Haau-Sing Li, Haokun Liu, Samuel R. Bowman; 11 Oct 2020)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations (Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg; 11 Oct 2020)
- Recurrent babbling: evaluating the acquisition of grammar from limited input data (Ludovica Pannitto, Aurélie Herbelot; 09 Oct 2020)
- Precise Task Formalization Matters in Winograd Schema Evaluations (Haokun Liu, William Huang, Dhara Mungra, Samuel R. Bowman; 08 Oct 2020)
- Assessing Phrasal Representation and Composition in Transformers (Lang-Chi Yu, Allyson Ettinger; 08 Oct 2020)
- Intrinsic Probing through Dimension Selection (Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell; 06 Oct 2020)
- Analyzing Individual Neurons in Pre-trained Language Models (Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov; 06 Oct 2020)
- On the Sub-Layer Functionalities of Transformer Decoder (Yilin Yang, Longyue Wang, Shuming Shi, Prasad Tadepalli, Stefan Lee, Zhaopeng Tu; 06 Oct 2020)
- On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers (Marius Mosbach, A. Khokhlova, Michael A. Hedderich, Dietrich Klakow; 06 Oct 2020)
- Pretrained Language Model Embryology: The Birth of ALBERT (Cheng-Han Chiang, Sung-Feng Huang, Hung-yi Lee; 06 Oct 2020)
- On the Branching Bias of Syntax Extracted from Pre-trained Language Models (Huayang Li, Lemao Liu, Guoping Huang, Shuming Shi; 06 Oct 2020)
- Pareto Probing: Trading Off Accuracy for Complexity (Tiago Pimentel, Naomi Saphra, Adina Williams, Ryan Cotterell; 05 Oct 2020)
- Linguistic Profiling of a Neural Language Model (Alessio Miaschi, D. Brunato, F. Dell'Orletta, Giulia Venturi; 05 Oct 2020)
- My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control (Vitaly Kurin, Maximilian Igl, Tim Rocktaschel, Wendelin Boehmer, Shimon Whiteson; 05 Oct 2020)
- Which *BERT? A Survey Organizing Contextualized Encoders (Patrick Xia, Shijie Wu, Benjamin Van Durme; 02 Oct 2020)
- Measuring Systematic Generalization in Neural Proof Generation with Transformers (Nicolas Angelard-Gontier, Koustuv Sinha, Siva Reddy, C. Pal; 30 Sep 2020)
- TaxiNLI: Taking a Ride up the NLU Hill (Pratik M. Joshi, Somak Aditya, Aalok Sathe, Monojit Choudhury; 30 Sep 2020)
- What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties (Rochelle Choenni, Ekaterina Shutova; 27 Sep 2020)
- Persian Ezafe Recognition Using Transformers and Its Role in Part-Of-Speech Tagging (Ehsan Doostmohammadi, Minoo Nassajian, Adel Rahimi; 20 Sep 2020)
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data (Jonathan Pilault, Amine Elhattami, C. Pal; 19 Sep 2020)