Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models. Workshop on Representation Learning for NLP (RepL4NLP), 2021.
Probing Across Time: What Does RoBERTa Know and When? Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models. Workshop on Representation Learning for NLP (RepL4NLP), 2021.
What's in your Head? Emergent Behaviour in Multi-Task Transformer Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
DirectProbe: Studying Representations without Classifiers. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Does My Representation Capture X? Probe-Ably. Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
Joint Universal Syntactic and Semantic Parsing. Transactions of the Association for Computational Linguistics (TACL), 2021.
Factual Probing Is [MASK]: Learning vs. Learning to Recall. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Does syntax matter? A strong baseline for Aspect-based Sentiment Analysis with RoBERTa. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model Performance. Nordic Conference on Computational Linguistics (NoDaLiDa), 2021.
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Connecting Attributions and QA Model Behavior on Realistic Counterfactuals. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Low-Complexity Probing via Finding Subnetworks. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Exploring the Role of BERT Token Representations to Explain Sentence Probing Results. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Dodrio: Exploring Transformer Models with Interactive Visualization. Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
Local Interpretations for Explainable Natural Language Processing: A Survey. ACM Computing Surveys (CSUR), 2021.
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics. Journal of Artificial Intelligence Research (JAIR), 2021.
Contrastive Explanations for Model Interpretability. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Chess as a Testbed for Language Model State Tracking. AAAI Conference on Artificial Intelligence (AAAI), 2021.
Probing Classifiers: Promises, Shortcomings, and Advances. Computational Linguistics (CL), 2021.
Probing Multimodal Embeddings for Linguistic Properties: the Visual-Semantic Case. International Conference on Computational Linguistics (COLING), 2020.
Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers. Transactions of the Association for Computational Linguistics (TACL), 2021.
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021.
Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021.
Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training. AAAI Conference on Artificial Intelligence (AAAI), 2020.
Exploring Neural Entity Representations for Semantic Information. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020.
diagNNose: A Library for Neural Activation Analysis. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020.
Influence Patterns for Explaining Information Flow in BERT. Neural Information Processing Systems (NeurIPS), 2021.
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Unsupervised Distillation of Syntactic Information from Contextualized Word Representations. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020.
On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020.
Pretrained Language Model Embryology: The Birth of ALBERT. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Pareto Probing: Trading Off Accuracy for Complexity. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Linguistic Profiling of a Neural Language Model. International Conference on Computational Linguistics (COLING), 2020.
Which *BERT? A Survey Organizing Contextualized Encoders. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Examining the rhetorical capacities of neural language models. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020.
Dissecting Lottery Ticket Transformers: Structural and Behavioral Study of Sparse Neural Machine Translation. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020.