Designing and Interpreting Probes with Control Tasks

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
8 September 2019
John Hewitt, Abigail Z. Jacobs
arXiv: 1909.03368

Papers citing "Designing and Interpreting Probes with Control Tasks"

Showing 50 of 381 citing papers.
On the Pitfalls of Analyzing Individual Neurons in Language Models. Omer Antverg, Yonatan Belinkov. 14 Oct 2021.
Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors. Marvin Kaster, Wei Zhao, Steffen Eger. EMNLP 2021. 08 Oct 2021.
BERT4GCN: Using BERT Intermediate Layers to Augment GCN for Aspect-based Sentiment Classification. Zeguan Xiao, Jiarun Wu, Qingliang Chen, Congjian Deng. 01 Oct 2021.
Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations. Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova. 28 Sep 2021.
Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing. Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang Lou. Findings 2021. 22 Sep 2021.
Conditional probing: measuring usable information beyond a baseline. John Hewitt, Kawin Ethayarajh, Abigail Z. Jacobs, Christopher D. Manning. 19 Sep 2021.
Grounding Natural Language Instructions: Can Large Language Models Capture Spatial Information? Julia Rozanova, Deborah Ferreira, K. Dubba, Weiwei Cheng, Dell Zhang, André Freitas. 17 Sep 2021.
Adversarial Scrubbing of Demographic Information for Text Classification. Somnath Basu Roy Chowdhury, Sayan Ghosh, Yiyuan Li, Junier B. Oliva, Shashank Srivastava, Snigdha Chaturvedi. 17 Sep 2021.
Do Language Models Know the Way to Rome? Bastien Liétard, Mostafa Abdou, Anders Søgaard. 16 Sep 2021.
Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models? Sagnik Ray Choudhury, Nikita Bhutani, Isabelle Augenstein. 15 Sep 2021.
The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders. Han He, Jinho Choi. 14 Sep 2021.
Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color. Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard. 13 Sep 2021.
Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar. 13 Sep 2021.
Debiasing Methods in Natural Language Understanding Make Bias More Accessible. Michael J. Mendelson, Yonatan Belinkov. EMNLP 2021. 09 Sep 2021.
A Bayesian Framework for Information-Theoretic Probing. Tiago Pimentel, Robert Bamler. EMNLP 2021. 08 Sep 2021.
How much pretraining data do language models need to learn syntax? Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner. EMNLP 2021. 07 Sep 2021.
How Does Adversarial Fine-Tuning Benefit BERT? J. Ebrahimi, Hao Yang, Wei Zhang. 31 Aug 2021.
Neuron-level Interpretation of Deep NLP Models: A Survey. Hassan Sajjad, Nadir Durrani, Fahim Dalvi. TACL 2021. 30 Aug 2021.
Automatic Text Evaluation through the Lens of Wasserstein Barycenters. Pierre Colombo, Guillaume Staerman, Chloé Clavel, Pablo Piantanida. EMNLP 2021. 27 Aug 2021.
What can Neural Referential Form Selectors Learn? Guanyi Chen, F. Same, Kees van Deemter. 15 Aug 2021.
Post-hoc Interpretability for Neural NLP: A Survey. Andreas Madsen, Siva Reddy, A. Chandar. ACM Computing Surveys 2021. 10 Aug 2021.
FMMformer: Efficient and Flexible Transformer via Decomposed Near-field and Far-field Attention. T. Nguyen, Vai Suliafu, Stanley J. Osher, Long Chen, Bao Wang. NeurIPS 2021. 05 Aug 2021.
Is My Model Using The Right Evidence? Systematic Probes for Examining Evidence-Based Tabular Reasoning. Vivek Gupta, Riyaz Ahmad Bhat, Atreya Ghosal, Manisha Srivastava, M. Singh, Vivek Srikumar. 02 Aug 2021.
Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images. Katy Blumer, Subhashini Venugopalan, Michael P. Brenner, Jon M. Kleinberg. 23 Jul 2021.
What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis. Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali. 01 Jul 2021.
A Closer Look at How Fine-tuning Changes BERT. Yichu Zhou, Vivek Srikumar. ACL 2021. 27 Jun 2021.
Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations. Witold Oleszkiewicz, Dominika Basaj, Igor Sieradzki, Michal Górszczak, Barbara Rychalska, K. Lewandowska, Tomasz Trzciński, Bartosz Zieliński. IEEE Access 2021. 21 Jun 2021.
Biomedical Interpretable Entity Representations. Diego Garcia-Olano, Yasumasa Onoe, Ioana Baldini, Joydeep Ghosh, Byron C. Wallace, Kush R. Varshney. Findings 2021. 17 Jun 2021.
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models. Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov. ACL 2021. 10 Jun 2021.
Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets. Lan Zhang, Victor Prokhorov, Ehsan Shareghi. RepL4NLP 2021. 07 Jun 2021.
Causal Abstractions of Neural Networks. Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts. NeurIPS 2021. 06 Jun 2021.
Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning. Forrest Davis, Marten van Schijndel. ACL 2021. 02 Jun 2021.
Implicit Representations of Meaning in Neural Language Models. Belinda Z. Li, Maxwell Nye, Jacob Andreas. ACL 2021. 01 Jun 2021.
Language Model Evaluation Beyond Perplexity. Clara Meister, Robert Bamler. ACL 2021. 31 May 2021.
How transfer learning impacts linguistic knowledge in deep NLP models? Nadir Durrani, Hassan Sajjad, Fahim Dalvi. Findings 2021. 31 May 2021.
What if This Modified That? Syntactic Interventions via Counterfactual Embeddings. Mycal Tucker, Peng Qian, R. Levy. Findings 2021. 28 May 2021.
Language Models Use Monotonicity to Assess NPI Licensing. Jaap Jumelet, Milica Denić, Jakub Szymanik, Dieuwke Hupkes, Shane Steinert-Threlkeld. Findings 2021. 28 May 2021.
A Non-Linear Structural Probe. Jennifer C. White, Tiago Pimentel, Naomi Saphra, Robert Bamler. NAACL 2021. 21 May 2021.
Fine-grained Interpretation and Causation Analysis in Deep NLP Models. Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani. NAACL 2021. 17 May 2021.
The Low-Dimensional Linear Geometry of Contextualized Word Representations. Evan Hernandez, Jacob Andreas. CoNLL 2021. 15 May 2021.
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction. Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg. CoNLL 2021. 14 May 2021.
Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level. Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt. Findings 2021. 13 May 2021.
How Reliable are Model Diagnostics? V. Aribandi, Yi Tay, Donald Metzler. Findings 2021. 12 May 2021.
FNet: Mixing Tokens with Fourier Transforms. James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. NAACL 2021. 09 May 2021.
Bird's Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach. Buse Giledereli, Mrinmaya Sachan. ACL 2021. 06 May 2021.
Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses. Aina Garí Soler, Marianna Apidianaki. TACL 2021. 29 Apr 2021.
Morph Call: Probing Morphosyntactic Content of Multilingual Transformers. Vladislav Mikhailov, O. Serikov, Ekaterina Artemova. 26 Apr 2021.
Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith. TACL 2021. 22 Apr 2021.
Linguistic Dependencies and Statistical Dependence. Jacob Louis Hoover, Alessandro Sordoni, Wenyu Du, Timothy J. O'Donnell. EMNLP 2021. 18 Apr 2021.
A multilabel approach to morphosyntactic probing. Naomi Tachikawa Shapiro, Amandalynne Paullada, Shane Steinert-Threlkeld. EMNLP 2021. 17 Apr 2021.