LLMEval: A Preliminary Study on How to Evaluate Large Language Models. AAAI Conference on Artificial Intelligence (AAAI), 2023.
Qwen Technical Report. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, ..., Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu.
Cross-Lingual Knowledge Editing in Large Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Evaluating the Ripple Effects of Knowledge Editing in Language Models. Transactions of the Association for Computational Linguistics (TACL), 2023.
Llama 2: Open Foundation and Fine-Tuned Chat Models. Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ..., Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
CMMLU: Measuring massive multitask language understanding in Chinese. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Editing Large Language Models: Problems, Methods, and Opportunities. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models. Neural Information Processing Systems (NeurIPS), 2023.
Transformer-Patcher: One Mistake Worth One Neuron. International Conference on Learning Representations (ICLR), 2023.
Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models. Neural Information Processing Systems (NeurIPS), 2023.
Mass-Editing Memory in a Transformer. International Conference on Learning Representations (ICLR), 2022.
Calibrating Factual Knowledge in Pretrained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Memory-Based Model Editing at Scale. International Conference on Machine Learning (ICML), 2022.
Locating and Editing Factual Associations in GPT. Neural Information Processing Systems (NeurIPS), 2022.
Fast Model Editing at Scale. International Conference on Learning Representations (ICLR), 2021.
Editing Factual Knowledge in Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Editable Neural Networks. International Conference on Learning Representations (ICLR), 2020.
Language Models as Knowledge Bases? Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
Zero-Shot Relation Extraction via Reading Comprehension. Conference on Computational Natural Language Learning (CoNLL), 2017.