Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Recipes for Building an Open-Domain Chatbot. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2020.
The Gutenberg Dialogue Dataset. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2020.
Grounding Conversations with Improvised Dialogues. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Reverse Engineering Configurations of Neural Text Generation Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Unsupervised Commonsense Question Answering with Self-Talk. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Designing Precise and Robust Dialogue Response Evaluators. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Generating Counter Narratives against Online Hate Speech: Data and Strategies. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
TextGAIL: Generative Adversarial Imitation Learning for Text Generation. AAAI Conference on Artificial Intelligence (AAAI), 2020.
"You are grounded!": Latent Name Artifacts in Pre-trained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Evaluating the Evaluation of Diversity in Natural Language Generation. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2020.
Sparse Text Generation. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Heavy-tailed Representations, Text Polarity Classification & Data Augmentation. Neural Information Processing Systems (NeurIPS), 2020.
Efficient Content-Based Sparse Attention with Routing Transformers. Transactions of the Association for Computational Linguistics (TACL), 2020.
EmpTransfo: A Multi-head Transformer Architecture for Creating Empathetic Dialog Systems. The Florida AI Research Society (FLAIRS), 2020.
PolyGen: An Autoregressive Generative Model of 3D Meshes. International Conference on Machine Learning (ICML), 2020.
Incremental Sampling Without Replacement for Sequence Models. International Conference on Machine Learning (ICML), 2020.
Limits of Detecting Text Generated by Large-Scale Language Models. Information Theory and Applications Workshop (ITA), 2020.
Blank Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Consistency of a Recurrent Language Model With Respect to Incomplete Decoding. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Variational Template Machine for Data-to-Text Generation. International Conference on Learning Representations (ICLR), 2020.
Explaining Relationships Between Scientific Documents. Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus. The Web Conference (WWW), 2020.
A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation. Transactions of the Association for Computational Linguistics (TACL), 2020.