Revision Transformers: Instructing Language Models to Change their Values. European Conference on Artificial Intelligence (ECAI), 2022.
Prompting GPT-3 To Be Reliable. International Conference on Learning Representations (ICLR), 2022.
Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022.
Mass-Editing Memory in a Transformer. International Conference on Learning Representations (ICLR), 2022.
Can Pretrained Language Models (Yet) Reason Deductively? Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022.
Calibrating Factual Knowledge in Pretrained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
GLM-130B: An Open Bilingual Pre-trained Model. International Conference on Learning Representations (ICLR), 2022. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang.
Patching open-vocabulary models by interpolating weights. Neural Information Processing Systems (NeurIPS), 2022.
Repairing Neural Networks by Leaving the Right Past Behind. Neural Information Processing Systems (NeurIPS), 2022.
BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
Memory-Based Model Editing at Scale. International Conference on Machine Learning (ICML), 2022.
Post-hoc Concept Bottleneck Models. International Conference on Learning Representations (ICLR), 2022.
Language Anisotropic Cross-Lingual Model Editing. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
On Continual Model Refinement in Out-of-Distribution Data Streams. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
Meta Learning for Natural Language Processing: A Survey. North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Plug-and-Play Adaptation for Continuously-updated QA. Findings of the Association for Computational Linguistics (Findings), 2022.
VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance. European Conference on Computer Vision (ECCV), 2022.
Fast Few-shot Debugging for NLU Test Suites. Workshop on Knowledge Extraction and Integration for Deep Learning Architectures; Deep Learning Inside Out (DeeLIO), 2022.
Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Retrieval Augmented Classification for Long-Tail Visual Recognition. Computer Vision and Pattern Recognition (CVPR), 2022.
Locating and Editing Factual Associations in GPT. Neural Information Processing Systems (NeurIPS), 2022.
Memory-assisted prompt editing to improve GPT-3 after deployment. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Fast Model Editing at Scale. International Conference on Learning Representations (ICLR), 2021.
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts. Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou.
Time-Aware Language Models as Temporal Knowledge Bases. Transactions of the Association for Computational Linguistics (TACL), 2021.
Mind the Gap: Assessing Temporal Generalization in Neural Language Models. Neural Information Processing Systems (NeurIPS), 2021.