
| Title | Venue | Year |
|---|---|---|
| Fine-tuning Language Models for Factuality | International Conference on Learning Representations (ICLR) | 2023 |
| Evaluating the Knowledge Base Completion Potential of GPT | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2023 |
| Language Models Hallucinate, but May Excel at Fact Verification | North American Chapter of the Association for Computational Linguistics (NAACL) | 2023 |
| Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting | International Conference on Learning Representations (ICLR) | 2023 |
| Head-to-Tail: How Knowledgeable are Large Language Models (LLMs)? A.K.A. Will LLMs Replace Knowledge Graphs? | North American Chapter of the Association for Computational Linguistics (NAACL) | 2023 |
| Extracting Multi-valued Relations from Language Models | Workshop on Representation Learning for NLP (RepL4NLP) | 2023 |
| FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation (Sewon Min, Kalpesh Krishna, Xinxi Lyu, M. Lewis, Anuj Kumar, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi) | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2023 |
| Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4 | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2023 |
| In-Context Retrieval-Augmented Language Models | Transactions of the Association for Computational Linguistics (TACL) | 2023 |
| REPLUG: Retrieval-Augmented Black-Box Language Models | North American Chapter of the Association for Computational Linguistics (NAACL) | 2023 |
| Large Language Models Struggle to Learn Long-Tail Knowledge | International Conference on Machine Learning (ICML) | 2022 |
| Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity | Annual Meeting of the Association for Computational Linguistics (ACL) | 2021 |
| Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2021 |
| Editing Factual Knowledge in Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2021 |
| KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction | The Web Conference (WWW) | 2021 |
| Learning How to Ask: Querying LMs with Mixtures of Soft Prompts | North American Chapter of the Association for Computational Linguistics (NAACL) | 2021 |
| Factual Probing Is [MASK]: Learning vs. Learning to Recall | North American Chapter of the Association for Computational Linguistics (NAACL) | 2021 |
| Global-to-Local Neural Networks for Document-Level Relation Extraction | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2020 |
| Language Models are Few-Shot Learners | Neural Information Processing Systems (NeurIPS) | 2020 |
| How Much Knowledge Can You Pack Into the Parameters of a Language Model? | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2020 |
| REALM: Retrieval-Augmented Language Model Pre-Training | International Conference on Machine Learning (ICML) | 2020 |
| How Can We Know What Language Models Know? | Transactions of the Association for Computational Linguistics (TACL) | 2019 |
| Compressive Transformers for Long-Range Sequence Modelling | International Conference on Learning Representations (ICLR) | 2019 |
| E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT | Findings of the Association for Computational Linguistics (Findings) | 2019 |
| Language Models as Knowledge Bases? | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2019 |
| DocRED: A Large-Scale Document-Level Relation Extraction Dataset | Annual Meeting of the Association for Computational Linguistics (ACL) | 2019 |