What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
arXiv:1905.06316, 15 May 2019

Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations" (showing 50 of 532):

BERT Embeddings for Automatic Readability Assessment
Joseph Marvin Imperial (15 Jun 2021)

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu (14 Jun 2021) [AIFin, MQ, AI4MH]

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov (10 Jun 2021)

BERTnesia: Investigating the capture and forgetting of knowledge in BERT
Jonas Wallat, Jaspreet Singh, Avishek Anand (05 Jun 2021) [CLL, KELM]

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Yichen Jiang, Asli Celikyilmaz, P. Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao (02 Jun 2021)

John praised Mary because he? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs
Yova Kementchedjhieva, Mark Anderson, Anders Søgaard (02 Jun 2021)

Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT
Anmol Nayak, Hariprasad Timmapathini (01 Jun 2021)

Corpus-Based Paraphrase Detection Experiments and Review
T. Vrbanec, A. Meštrović (31 May 2021)

On the Interplay Between Fine-tuning and Composition in Transformers
Lang-Chi Yu, Allyson Ettinger (31 May 2021)

Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge
Yi Zhang, Lei Li, Yunfang Wu, Qi Su, Xu Sun (28 May 2021)

Inspecting the concept knowledge graph encoded by modern language models
Carlos Aspillaga, Marcelo Mendoza, Alvaro Soto (27 May 2021)

LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond
Daniel Loureiro, A. Jorge, Jose Camacho-Collados (26 May 2021)

The Low-Dimensional Linear Geometry of Contextualized Word Representations
Evan Hernandez, Jacob Andreas (15 May 2021) [MILM]

How Reliable are Model Diagnostics?
V. Aribandi, Yi Tay, Donald Metzler (12 May 2021)

Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
Laura Pérez-Mayos, Alba Táboas García, Simon Mille, Leo Wanner (10 May 2021) [ELM, LRM]

Bird's Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach
Yifan Hou, Mrinmaya Sachan (06 May 2021)

Morph Call: Probing Morphosyntactic Content of Multilingual Transformers
Vladislav Mikhailov, O. Serikov, Ekaterina Artemova (26 Apr 2021)

Improving BERT Pretraining with Syntactic Supervision
Georgios Tziafas, Konstantinos Kogkalidis, G. Wijnholds, M. Moortgat (21 Apr 2021)

Enhancing Cognitive Models of Emotions with Representation Learning
Yuting Guo, Jinho D. Choi (20 Apr 2021)

Probing Across Time: What Does RoBERTa Know and When?
Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith (16 Apr 2021) [KELM]

Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models
Karolina Stańczak, Sagnik Ray Choudhury, Tiago Pimentel, Ryan Cotterell, Isabelle Augenstein (15 Apr 2021)

Effect of Post-processing on Contextualized Word Representations
Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani (15 Apr 2021)

An Interpretability Illusion for BERT
Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg (14 Apr 2021) [MILM, FAtt]

Mediators in Determining what Processing BERT Performs First
Aviv Slobodkin, Leshem Choshen, Omri Abend (13 Apr 2021) [MoE]

SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning
Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi (12 Apr 2021)

Evaluating Saliency Methods for Neural Language Models
Shuoyang Ding, Philipp Koehn (12 Apr 2021) [FAtt, XAI]

Does My Representation Capture X? Probe-Ably
Deborah Ferreira, Julia Rozanova, Mokanarangan Thayaparan, Marco Valentino, André Freitas (12 Apr 2021)

On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
Tianyi Zhang, Tatsunori Hashimoto (12 Apr 2021) [AI4CE]

Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen (12 Apr 2021)

Connecting Attributions and QA Model Behavior on Realistic Counterfactuals
Xi Ye, Rohan Nair, Greg Durrett (09 Apr 2021)

Transformers: "The End of History" for NLP?
Anton Chernyavskiy, Dmitry Ilvovsky, Preslav Nakov (09 Apr 2021)

Probing BERT in Hyperbolic Spaces
Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, L. Jing (08 Apr 2021)

Better Neural Machine Translation by Extracting Linguistic Information from BERT
Hassan S. Shavarani, Anoop Sarkar (07 Apr 2021)

Exploring the Role of BERT Token Representations to Explain Sentence Probing Results
Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar (03 Apr 2021) [MILM]

Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors
Zeyu Yun, Yubei Chen, Bruno A. Olshausen, Yann LeCun (29 Mar 2021)

Synthesis of Compositional Animations from Textual Descriptions
Anindita Ghosh, N. Cheema, Cennet Oguz, Christian Theobalt, P. Slusallek (26 Mar 2021)

Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon (20 Mar 2021) [MILM]

The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models
Go Inoue, Bashar Alhafni, Nurpeiis Baimukan, Houda Bouamor, Nizar Habash (11 Mar 2021)

Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do
P. Schramowski, Cigdem Turan, Nico Andersen, Constantin Rothkopf, Kristian Kersting (08 Mar 2021)

Few-shot Learning for Slot Tagging with Attentive Relational Network
Cennet Oguz, Ngoc Thang Vu (03 Mar 2021)

The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov (02 Mar 2021)

Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages
Rajaswa Patil, Jasleen Dhillon, Siddhant Mahurkar, Saumitra Kulkarni, M. Malhotra, V. Baths (01 Mar 2021)

RuSentEval: Linguistic Source, Encoder Force!
Vladislav Mikhailov, Ekaterina Taktasheva, Elina Sigdel, Ekaterina Artemova (28 Feb 2021) [VLM]

Chess as a Testbed for Language Model State Tracking
Shubham Toshniwal, Sam Wiseman, Karen Livescu, Kevin Gimpel (26 Feb 2021)

Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov (24 Feb 2021)

Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks
Tingyu Xia, Yue Wang, Yuan Tian, Yi-Ju Chang (22 Feb 2021)

Evaluating Contextualized Language Models for Hungarian
Judit Ács, Dániel Lévai, D. Nemeskey, András Kornai (22 Feb 2021)

The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point
Magnus Sahlgren, F. Carlsson (08 Feb 2021)

Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration
Betty van Aken, Jens-Michalis Papaioannou, M. Mayrdorfer, Klemens Budde, Felix Alexander Gers, Alexander Löser (08 Feb 2021)

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations
Laura Pérez-Mayos, Roberto Carlini, Miguel Ballesteros, Leo Wanner (27 Jan 2021)