ResearchTrend.AI
What do you learn from context? Probing for sentence structure in contextualized word representations
International Conference on Learning Representations (ICLR), 2019
15 May 2019
Ian Tenney, Patrick Xia, Berlin Chen, Alex Jinpeng Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
ArXiv (abs) · PDF · HTML

Papers citing "What do you learn from context? Probing for sentence structure in contextualized word representations"

50 / 555 papers shown
Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators
International Conference on Learning Representations (ICLR), 2022
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song
07 Apr 2022

An Exploratory Study on Code Attention in BERT
IEEE International Conference on Program Comprehension (ICPC), 2022
Rishab Sharma, Fuxiang Chen, Fatemeh H. Fard, David Lo
05 Apr 2022

An Analysis of Semantically-Aligned Speech-Text Embeddings
Spoken Language Technology Workshop (SLT), 2022
M. Huzaifah, Ivan Kukanov
04 Apr 2022

Effect and Analysis of Large-scale Language Model Rescoring on Competitive ASR Systems
Interspeech, 2022
Takuma Udagawa, Masayuki Suzuki, Gakuto Kurata, N. Itoh, G. Saon
01 Apr 2022

Interpretation of Black Box NLP Models: A Survey
Shivani Choudhary, N. Chatterjee, S. K. Saha
31 Mar 2022
Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Ehsan Aghazadeh, Mohsen Fayyaz, Yadollah Yaghoobzadeh
26 Mar 2022

How does the pre-training objective affect what large language models learn about linguistic properties?
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Ahmed Alajrami, Nikolaos Aletras
20 Mar 2022

On the Importance of Data Size in Probing Fine-tuned Models
Findings, 2022
Houman Mehrafarin, S. Rajaee, Mohammad Taher Pilehvar
17 Mar 2022

Finding Structural Knowledge in Multimodal-BERT
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Victor Milewski, Miryam de Lhoneux, Marie-Francine Moens
17 Mar 2022

A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Deming Ye, Yankai Lin, Peng Li, Maosong Sun, Zhiyuan Liu
27 Feb 2022

On the data requirements of probing
Findings, 2022
Zining Zhu, Jixuan Wang, Bai Li, Frank Rudzicz
25 Feb 2022
BERTVision -- A Parameter-Efficient Approach for Question Answering
Siduo Jiang, Cristopher Benge, Will King
24 Feb 2022

Evaluating the Construct Validity of Text Embeddings with Application to Survey Questions
EPJ Data Science (EPJ Data Sci.), 2022
Qixiang Fang, D. Nguyen, Daniel L. Oberski
18 Feb 2022

Probing Pretrained Models of Source Code
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022
Sergey Troshin, Nadezhda Chirkova
16 Feb 2022

ZeroGen: Efficient Zero-shot Learning via Dataset Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong
16 Feb 2022

Do Transformers Encode a Foundational Ontology? Probing Abstract Classes in Natural Language
Mael Jullien, Marco Valentino, André Freitas
25 Jan 2022
A Latent-Variable Model for Intrinsic Probing
AAAI Conference on Artificial Intelligence (AAAI), 2022
Karolina Stañczak, Lucas Torroba Hennigen, Adina Williams, Robert Bamler, Isabelle Augenstein
20 Jan 2022

Zero-Shot and Few-Shot Classification of Biomedical Articles in Context of the COVID-19 Pandemic
Simon Lupart, Benoit Favre, Vassilina Nikoulina, Salah Ait-Mokhtar
09 Jan 2022

Does Entity Abstraction Help Generative Transformers Reason?
Nicolas Angelard-Gontier, Siva Reddy, C. Pal
05 Jan 2022

Discrete and continuous representations and processing in deep learning: Looking forward
AI Open (AO), 2022
Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens
04 Jan 2022
Is "My Favorite New Movie" My Favorite Movie? Probing the Understanding of Recursive Noun Phrases
Qing Lyu, Hua Zheng, Daoxin Li, Li Zhang, Marianna Apidianaki, Chris Callison-Burch
15 Dec 2021

Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling
Jakob Prange, Nathan Schneider, Lingpeng Kong
15 Dec 2021

LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework
Mengjie Zhao, Fei Mi, Yasheng Wang, Minglei Li, Xin Jiang, Qun Liu, Hinrich Schütze
14 Dec 2021

Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation
Raymond Li, Wen Xiao, Linzi Xing, Lanjun Wang, Gabriel Murray, Giuseppe Carenini
10 Dec 2021
Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification
AAAI Conference on Artificial Intelligence (AAAI), 2021
Zhenhailong Wang, Heng Ji
05 Dec 2021

LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI
Language Resources and Evaluation (LRE), 2021
Ishan Tarunesh, Somak Aditya, Monojit Choudhury
04 Dec 2021

Probing Linguistic Information For Logical Inference In Pre-trained Language Models
Zeming Chen, Qiyue Gao
03 Dec 2021

Using Distributional Principles for the Semantic Study of Contextual Language Models
Olivier Ferret
23 Nov 2021

Variation and generality in encoding of syntactic anomaly information in sentence embeddings
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2021
Qinxuan Wu, Allyson Ettinger
12 Nov 2021

Recent Advances in Automated Question Answering In Biomedical Domain
K. D. Baksi
10 Nov 2021
Schrödinger's Tree -- On Syntax and Neural Language Models
Artur Kulmizev, Joakim Nivre
17 Oct 2021

Semantics-aware Attention Improves Neural Machine Translation
Aviv Slobodkin, Leshem Choshen, Omri Abend
13 Oct 2021

Investigating the Impact of Pre-trained Language Models on Dialog Evaluation
Chen Zhang, L. F. D'Haro, Yiming Chen, Thomas Friedrichs, Haizhou Li
05 Oct 2021

Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021

Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova
28 Sep 2021
Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query
G. Ngo, Minh Le Nguyen, Nancy F. Chen, M. Sabuncu
28 Sep 2021

Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health
Andrew Lee, Jonathan K. Kummerfeld, Lawrence C. An, Amélie Reymond
28 Sep 2021

Sorting through the noise: Testing robustness of information processing in pre-trained language models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Lalchand Pandia, Allyson Ettinger
25 Sep 2021

Text-based NP Enrichment
Transactions of the Association for Computational Linguistics (TACL), 2021
Yanai Elazar, Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
24 Sep 2021

AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
Dialogue and Discourse (DD), 2021
Yaman Kumar Singla, Swapnil Parekh, Somesh Singh, Junjie Li, R. Shah, Changyou Chen
24 Sep 2021
Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, Vivek Srikumar
23 Sep 2021

Enriching and Controlling Global Semantics for Text Summarization
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Thong Nguyen, Anh Tuan Luu, Truc Lu, Tho Quan
22 Sep 2021

Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing
Findings, 2021
Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang Lou
22 Sep 2021

Does Vision-and-Language Pretraining Improve Lexical Grounding?
Tian Yun, Chen Sun, Ellie Pavlick
21 Sep 2021

Distilling Relation Embeddings from Pre-trained Language Models
Asahi Ushio, Jose Camacho-Collados, Steven Schockaert
21 Sep 2021
Conditional probing: measuring usable information beyond a baseline
John Hewitt, Kawin Ethayarajh, Abigail Z. Jacobs, Christopher D. Manning
19 Sep 2021

What BERT Based Language Models Learn in Spoken Transcripts: An Empirical Study
Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa
19 Sep 2021

Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers
Jason Phang, Haokun Liu, Samuel R. Bowman
17 Sep 2021

Distilling Linguistic Context for Language Model Compression
Geondo Park, Gyeongman Kim, Eunho Yang
17 Sep 2021

Comparing Text Representations: A Theory-Driven Approach
Gregory Yauney, David M. Mimno
15 Sep 2021