
What do you learn from context? Probing for sentence structure in contextualized word representations (arXiv:1905.06316)

International Conference on Learning Representations (ICLR), 2019
15 May 2019
Ian Tenney
Patrick Xia
Berlin Chen
Alex Jinpeng Wang
Adam Poliak
R. Thomas McCoy
Najoung Kim
Benjamin Van Durme
Samuel R. Bowman
Dipanjan Das
Ellie Pavlick

Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations"

50 / 555 papers shown
Exploring Mode Connectivity for Pre-trained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou. 25 Oct 2022.

Controlled Text Reduction
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Aviv Slobodkin, Paul Roit, Eran Hirsch, Ori Ernst, Ido Dagan. 24 Oct 2022.

The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Leonie Weissweiler, Valentin Hofmann, Abdullatif Köksal, Hinrich Schütze. 24 Oct 2022.

Structural generalization is hard for sequence-to-sequence models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Yuekun Yao, Alexander Koller. 24 Oct 2022.

An Empirical Revisiting of Linguistic Knowledge Fusion in Language Understanding Tasks
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Changlong Yu, Tianyi Xiao, Lingpeng Kong, Yangqiu Song, Wilfred Ng. 24 Oct 2022.

ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong. 22 Oct 2022.

Probing with Noise: Unpicking the Warp and Weft of Embeddings
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022. Filip Klubicka, John D. Kelleher. 21 Oct 2022.

Exploration of the Usage of Color Terms by Color-blind Participants in Online Discussion Platforms
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Ella Rabinovich, Boaz Carmeli. 21 Oct 2022.

SLING: Sino Linguistic Evaluation of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Yixiao Song, Kalpesh Krishna, R. Bhatt, Mohit Iyyer. 21 Oct 2022.

Automatic Document Selection for Efficient Encoder Pretraining
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Yukun Feng, Patrick Xia, Benjamin Van Durme, João Sedoc. 20 Oct 2022.

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022. I. Nejadgholi, Esma Balkir, Kathleen C. Fraser, S. Kiritchenko. 19 Oct 2022.

Taxonomy of Abstractive Dialogue Summarization: Scenarios, Approaches and Future Directions
ACM Computing Surveys (ACM CSUR), 2022. Qi Jia, Yizhu Liu, Siyu Ren, Kenny Q. Zhu. 18 Oct 2022.

Transparency Helps Reveal When Language Models Learn Meaning
Transactions of the Association for Computational Linguistics (TACL), 2022. Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith. 14 Oct 2022.

On the Explainability of Natural Language Processing Deep Models
ACM Computing Surveys (ACM CSUR), 2022. Julia El Zini, M. Awad. 13 Oct 2022.

Analyzing Text Representations under Tight Annotation Budgets: Measuring Structural Alignment
César González-Gutiérrez, Audi Primadhanty, Francesco Cazzaro, A. Quattoni. 11 Oct 2022.

"No, they did not": Dialogue response dynamics in pre-trained language models
International Conference on Computational Linguistics (COLING), 2022. Sanghee Kim, Lang-Chi Yu, Allyson Ettinger. 05 Oct 2022.

Evaluation of taxonomic and neural embedding methods for calculating semantic similarity
Natural Language Engineering (NLE), 2021. Dongqiang Yang, Yanqin Yin. 30 Sep 2022.

Towards Faithful Model Explanation in NLP: A Survey
Computational Linguistics (CL), 2022. Qing Lyu, Marianna Apidianaki, Chris Callison-Burch. 22 Sep 2022.

Negation, Coordination, and Quantifiers in Contextualized Language Models
International Conference on Computational Linguistics (COLING), 2022. A. Kalouli, Rita Sevastjanova, C. Beck, Maribel Romero. 16 Sep 2022.

DECK: Behavioral Tests to Improve Interpretability and Generalizability of BERT Models Detecting Depression from Text
Jekaterina Novikova, Ksenia Shkaruta. 12 Sep 2022.

Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022. Pangbo Ban, Yifan Jiang, Tianran Liu, Shane Steinert-Threlkeld. 11 Sep 2022.

On the Effectiveness of Compact Biomedical Transformers
Omid Rohanian, Mohammadmahdi Nouriborji, Samaneh Kouchaki, David Clifton. 07 Sep 2022.

Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering?
Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi. 01 Sep 2022.

Combating high variance in Data-Scarce Implicit Hate Speech Classification
IEEE Region 10 Conference (TENCON), 2022. Debaditya Pal, Kaustubh Chaudhari, Harsh Sharma. 29 Aug 2022.

Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings
IEEE/ACM Transactions on Audio Speech and Language Processing (TASLP), 2022. Yile Wang, Yue Zhang. 20 Aug 2022.

What Artificial Neural Networks Can Tell Us About Human Language Acquisition
Alex Warstadt, Samuel R. Bowman. 17 Aug 2022.

Unit Testing for Concepts in Neural Networks
Transactions of the Association for Computational Linguistics (TACL), 2022. Charles Lovering, Ellie Pavlick. 28 Jul 2022.

A Transformer-based Neural Language Model that Synthesizes Brain Activation Maps from Free-Form Text Queries
G. Ngo, Minh Le Nguyen, Nancy F. Chen, M. Sabuncu. 24 Jul 2022.

Pretraining on Interactions for Learning Grounded Affordance Representations
Jack Merullo, Dylan Ebert, Carsten Eickhoff, Ellie Pavlick. 05 Jul 2022.

Probing via Prompting
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Jiaoda Li, Robert Bamler, Mrinmaya Sachan. 04 Jul 2022.

Is neural language acquisition similar to natural? A chronological probing study
Computational Linguistics and Intellectual Technologies (CLIT), 2022. E. Voloshina, O. Serikov, Tatiana Shavrina. 01 Jul 2022.

A Unified Understanding of Deep NLP Models for Text Classification
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022. Zhuguo Li, Xiting Wang, Weikai Yang, Jing Wu, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun, Hui Zhang, Shixia Liu. 19 Jun 2022.

AnyMorph: Learning Transferable Policies By Inferring Agent Morphology
International Conference on Machine Learning (ICML), 2022. Brandon Trabucco, Mariano Phielipp, Glen Berseth. 17 Jun 2022.

Transition-based Abstract Meaning Representation Parsing with Contextual Embeddings
Yi Liang. 13 Jun 2022.

Sort by Structure: Language Model Ranking as Dependency Probing
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Max Müller-Eberstein, Rob van der Goot, Barbara Plank. 10 Jun 2022.

Abstraction not Memory: BERT and the English Article System
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Harish Tayyar Madabushi, Dagmar Divjak, P. Milin. 08 Jun 2022.

Latent Topology Induction for Understanding Contextualized Representations
Yao Fu, Mirella Lapata. 03 Jun 2022.

Garden-Path Traversal in GPT-2
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022. William Jurayj, William Rudman, Carsten Eickhoff. 24 May 2022.

What company do words keep? Revisiting the distributional semantics of J.R. Firth & Zellig Harris
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Mikael Brunila, J. LaViolette. 16 May 2022.

Discovering Latent Concepts Learned in BERT
International Conference on Learning Representations (ICLR), 2022. Fahim Dalvi, A. Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad. 15 May 2022.

Improving Contextual Representation with Gloss Regularized Pre-training
Yu Lin, Zhecheng An, Peihao Wu, Zejun Ma. 13 May 2022.

ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Junyi Li, Tianyi Tang, Zheng Gong, Lixin Yang, Zhuohao Yu, Zhongfu Chen, Jingyuan Wang, Wayne Xin Zhao, Ji-Rong Wen. 03 May 2022.

Probing for the Usage of Grammatical Number
Annual Meeting of the Association for Computational Linguistics (ACL), 2022. Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, Robert Bamler. 19 Apr 2022.

Probing Script Knowledge from Pre-Trained Models
Zijian Jin, Xingyu Zhang, Mo Yu, Lifu Huang. 16 Apr 2022.

On the Role of Pre-trained Language Models in Word Ordering: A Case Study with BART
International Conference on Computational Linguistics (COLING), 2022. Zebin Ou, Meishan Zhang, Yue Zhang. 15 Apr 2022.

Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Zeming Chen, Qiyue Gao. 13 Apr 2022.

Probing for Constituency Structure in Neural Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad. 13 Apr 2022.

A Review on Language Models as Knowledge Bases
Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona T. Diab, Marjan Ghazvininejad. 12 Apr 2022.

What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured
North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Alexander Henlein, Alexander Mehler. 12 Apr 2022.

A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition
Workshop on Representation Learning for NLP (RepL4NLP), 2022. Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig. 11 Apr 2022.