
| Title | Venue |
|---|---|
| Exploring Mode Connectivity for Pre-trained Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Controlled Text Reduction | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Structural generalization is hard for sequence-to-sequence models | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| An Empirical Revisiting of Linguistic Knowledge Fusion in Language Understanding Tasks | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Probing with Noise: Unpicking the Warp and Weft of Embeddings | BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022 |
| Exploration of the Usage of Color Terms by Color-blind Participants in Online Discussion Platforms | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| SLING: Sino Linguistic Evaluation of Large Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Automatic Document Selection for Efficient Encoder Pretraining | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information | BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022 |
| Taxonomy of Abstractive Dialogue Summarization: Scenarios, Approaches and Future Directions | ACM Computing Surveys (ACM CSUR), 2022 |
| Transparency Helps Reveal When Language Models Learn Meaning | Transactions of the Association for Computational Linguistics (TACL), 2022 |
| On the Explainability of Natural Language Processing Deep Models | ACM Computing Surveys (ACM CSUR), 2022 |
| "No, they did not": Dialogue response dynamics in pre-trained language models | International Conference on Computational Linguistics (COLING), 2022 |
| Evaluation of taxonomic and neural embedding methods for calculating semantic similarity | Natural Language Engineering (NLE), 2021 |
| Towards Faithful Model Explanation in NLP: A Survey | Computational Linguistics (CL), 2022 |
| Negation, Coordination, and Quantifiers in Contextualized Language Models | International Conference on Computational Linguistics (COLING), 2022 |
| Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis | BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022 |
| Combating high variance in Data-Scarce Implicit Hate Speech Classification | IEEE Region 10 Conference (TENCON), 2022 |
| Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings | IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 2022 |
| Unit Testing for Concepts in Neural Networks | Transactions of the Association for Computational Linguistics (TACL), 2022 |
| Probing via Prompting | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| Is neural language acquisition similar to natural? A chronological probing study | Computational Linguistics and Intellectual Technologies (CLIT), 2022 |
| A Unified Understanding of Deep NLP Models for Text Classification | IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022 |
| AnyMorph: Learning Transferable Policies By Inferring Agent Morphology | International Conference on Machine Learning (ICML), 2022 |
| Sort by Structure: Language Model Ranking as Dependency Probing | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| Abstraction not Memory: BERT and the English Article System | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| Garden-Path Traversal in GPT-2 | BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022 |
| What company do words keep? Revisiting the distributional semantics of J.R. Firth & Zellig Harris | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| Discovering Latent Concepts Learned in BERT | International Conference on Learning Representations (ICLR), 2022 |
| ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| Probing for the Usage of Grammatical Number | Annual Meeting of the Association for Computational Linguistics (ACL), 2022 |
| On the Role of Pre-trained Language Models in Word Ordering: A Case Study with BART | International Conference on Computational Linguistics (COLING), 2022 |
| Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| Probing for Constituency Structure in Neural Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured | North American Chapter of the Association for Computational Linguistics (NAACL), 2022 |
| A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition | Workshop on Representation Learning for NLP (RepL4NLP), 2022 |