ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

What do you learn from context? Probing for sentence structure in contextualized word representations
arXiv:1905.06316

15 May 2019
Ian Tenney
Patrick Xia
Berlin Chen
Alex Wang
Adam Poliak
R. Thomas McCoy
Najoung Kim
Benjamin Van Durme
Samuel R. Bowman
Dipanjan Das
Ellie Pavlick

Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations"

50 / 532 papers shown
Using Distributional Principles for the Semantic Study of Contextual Language Models
Olivier Ferret
23 Nov 2021
Variation and generality in encoding of syntactic anomaly information in sentence embeddings
Qinxuan Wu
Allyson Ettinger
12 Nov 2021
Recent Advances in Automated Question Answering In Biomedical Domain
K. D. Baksi
10 Nov 2021
Schrödinger's Tree -- On Syntax and Neural Language Models
Artur Kulmizev
Joakim Nivre
17 Oct 2021
Semantics-aware Attention Improves Neural Machine Translation
Aviv Slobodkin
Leshem Choshen
Omri Abend
13 Oct 2021
Investigating the Impact of Pre-trained Language Models on Dialog Evaluation
Chen Zhang
L. F. D’Haro
Yiming Chen
Thomas Friedrichs
Haizhou Li
05 Oct 2021
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair
Jaap Jumelet
Willem H. Zuidema
Raquel Fernández
30 Sep 2021
Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Ekaterina Taktasheva
Vladislav Mikhailov
Ekaterina Artemova
28 Sep 2021
Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query
G. Ngo
Minh Le Nguyen
Nancy F. Chen
M. Sabuncu
28 Sep 2021
Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health
Andrew Lee
Jonathan K. Kummerfeld
Lawrence C. An
Rada Mihalcea
28 Sep 2021
Sorting through the noise: Testing robustness of information processing in pre-trained language models
Lalchand Pandia
Allyson Ettinger
25 Sep 2021
Text-based NP Enrichment
Yanai Elazar
Victoria Basmov
Yoav Goldberg
Reut Tsarfaty
24 Sep 2021
AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
Yaman Kumar Singla
Swapnil Parekh
Somesh Singh
J. Li
R. Shah
Changyou Chen
24 Sep 2021
Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords
Taelin Karidi
Yichu Zhou
Nathan Schneider
Omri Abend
Vivek Srikumar
23 Sep 2021
Enriching and Controlling Global Semantics for Text Summarization
Thong Nguyen
A. Luu
Truc Lu
Tho Quan
22 Sep 2021
Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing
Qian Liu
Dejian Yang
Jiahui Zhang
Jiaqi Guo
Bin Zhou
Jian-Guang Lou
22 Sep 2021
Does Vision-and-Language Pretraining Improve Lexical Grounding?
Tian Yun
Chen Sun
Ellie Pavlick
21 Sep 2021
Distilling Relation Embeddings from Pre-trained Language Models
Asahi Ushio
Jose Camacho-Collados
Steven Schockaert
21 Sep 2021
Conditional probing: measuring usable information beyond a baseline
John Hewitt
Kawin Ethayarajh
Percy Liang
Christopher D. Manning
19 Sep 2021
What BERT Based Language Models Learn in Spoken Transcripts: An Empirical Study
Ayush Kumar
Mukuntha Narayanan Sundararaman
Jithendra Vepa
19 Sep 2021
Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers
Jason Phang
Haokun Liu
Samuel R. Bowman
17 Sep 2021
Distilling Linguistic Context for Language Model Compression
Geondo Park
Gyeongman Kim
Eunho Yang
17 Sep 2021
Comparing Text Representations: A Theory-Driven Approach
Gregory Yauney
David M. Mimno
15 Sep 2021
Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models?
Sagnik Ray Choudhury
Nikita Bhutani
Isabelle Augenstein
15 Sep 2021
The Grammar-Learning Trajectories of Neural Language Models
Leshem Choshen
Guy Hacohen
D. Weinshall
Omri Abend
13 Sep 2021
Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations
Mohsen Fayyaz
Ehsan Aghazadeh
Ali Modarressi
Hosein Mohebbi
Mohammad Taher Pilehvar
13 Sep 2021
COMBO: State-of-the-Art Morphosyntactic Analysis
Mateusz Klimaszewski
Alina Wróblewska
11 Sep 2021
Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding
Shane Storks
Qiaozi Gao
Yichi Zhang
J. Chai
10 Sep 2021
Beyond the Tip of the Iceberg: Assessing Coherence of Text Classifiers
Shane Storks
J. Chai
10 Sep 2021
How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy
S. Rajaee
Mohammad Taher Pilehvar
10 Sep 2021
A Bayesian Framework for Information-Theoretic Probing
Tiago Pimentel
Ryan Cotterell
08 Sep 2021
How much pretraining data do language models need to learn syntax?
Laura Pérez-Mayos
Miguel Ballesteros
Leo Wanner
07 Sep 2021
An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction
Samuel Mensah
Kai Sun
Nikolaos Aletras
02 Sep 2021
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
Linyang Li
Demin Song
Xiaonan Li
Jiehang Zeng
Ruotian Ma
Xipeng Qiu
31 Aug 2021
How Does Adversarial Fine-Tuning Benefit BERT?
J. Ebrahimi
Hao Yang
Wei Zhang
31 Aug 2021
Rethinking Why Intermediate-Task Fine-Tuning Works
Ting-Yun Chang
Chi-Jen Lu
26 Aug 2021
What do pre-trained code models know about code?
Anjan Karmakar
Romain Robbes
25 Aug 2021
Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen
Siva Reddy
A. Chandar
10 Aug 2021
Grounding Representation Similarity with Statistical Testing
Frances Ding
Jean-Stanislas Denain
Jacob Steinhardt
03 Aug 2021
Local Structure Matters Most: Perturbation Study in NLU
Louis Clouâtre
Prasanna Parthasarathi
Amal Zouaq
Sarath Chandar
29 Jul 2021
Language Models as Zero-shot Visual Semantic Learners
Yue Jiao
Jonathon S. Hare
Adam Prugel-Bennett
26 Jul 2021
Theoretical foundations and limits of word embeddings: what types of meaning can they capture?
Alina Arseniev-Koehler
22 Jul 2021
Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task
Ishan Tarunesh
Somak Aditya
Monojit Choudhury
15 Jul 2021
What do writing features tell us about AI papers?
Zining Zhu
Bai Li
Yang Xu
Frank Rudzicz
13 Jul 2021
A Flexible Multi-Task Model for BERT Serving
Tianwen Wei
Jianwei Qi
Shenghuang He
12 Jul 2021
A Survey on Data Augmentation for Text Classification
Markus Bayer
M. Kaufhold
Christian A. Reuter
07 Jul 2021
A Closer Look at How Fine-tuning Changes BERT
Yichu Zhou
Vivek Srikumar
27 Jun 2021
Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases
Boxi Cao
Hongyu Lin
Xianpei Han
Le Sun
Lingyong Yan
M. Liao
Tong Xue
Jin Xu
17 Jun 2021
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei
Sang Michael Xie
Tengyu Ma
17 Jun 2021
Coreference-Aware Dialogue Summarization
Zhengyuan Liu
Ke Shi
Nancy F. Chen
16 Jun 2021