Editing Factual Knowledge in Language Models

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
16 April 2021
Nicola De Cao
Wilker Aziz
Ivan Titov
KELM
ArXiv (abs) · PDF · HTML · GitHub (138★)

Papers citing "Editing Factual Knowledge in Language Models"

36 / 436 papers shown
Revision Transformers: Instructing Language Models to Change their Values
European Conference on Artificial Intelligence (ECAI), 2022
Felix Friedrich
Wolfgang Stammer
P. Schramowski
Kristian Kersting
KELM
265
11
0
19 Oct 2022
Prompting GPT-3 To Be Reliable
International Conference on Learning Representations (ICLR), 2022
Chenglei Si
Zhe Gan
Zhengyuan Yang
Shuohang Wang
Jianfeng Wang
Jordan L. Boyd-Graber
Lijuan Wang
KELM, LRM
414
343
0
17 Oct 2022
Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Sachin Kumar
Vidhisha Balachandran
Lucille Njoo
Antonios Anastasopoulos
Yulia Tsvetkov
ELM
452
106
0
14 Oct 2022
Is It Worth the (Environmental) Cost? Limited Evidence for Temporal Adaptation via Continuous Training
Giuseppe Attanasio
Debora Nozza
Federico Bianchi
Dirk Hovy
CLL
211
3
0
13 Oct 2022
Mass-Editing Memory in a Transformer
International Conference on Learning Representations (ICLR), 2022
Kevin Meng
Arnab Sen Sharma
A. Andonian
Yonatan Belinkov
David Bau
KELM, VLM
437
797
0
13 Oct 2022
Can Pretrained Language Models (Yet) Reason Deductively?
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Moy Yuan
Songbo Hu
Ivan Vulić
Anna Korhonen
Zaiqiao Meng
ReLM, ELM, LRM
205
10
0
12 Oct 2022
Calibrating Factual Knowledge in Pretrained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Qingxiu Dong
Damai Dai
Yifan Song
Jingjing Xu
Zhifang Sui
Lei Li
KELM
900
102
0
07 Oct 2022
GLM-130B: An Open Bilingual Pre-trained Model
International Conference on Learning Representations (ICLR), 2022
Aohan Zeng
Xiao Liu
Zhengxiao Du
Zihan Wang
Hanyu Lai
...
Jidong Zhai
Wenguang Chen
Peng Zhang
Yuxiao Dong
Jie Tang
BDL, LRM
805
1,221
0
05 Oct 2022
Learning by Distilling Context
Charles Burton Snell
Dan Klein
Ruiqi Zhong
ReLM, LRM
615
62
0
30 Sep 2022
Patching open-vocabulary models by interpolating weights
Neural Information Processing Systems (NeurIPS), 2022
Gabriel Ilharco
Mitchell Wortsman
S. Gadre
Shuran Song
Hannaneh Hajishirzi
Simon Kornblith
Ali Farhadi
Ludwig Schmidt
VLM, KELM
355
202
0
10 Aug 2022
Repairing Neural Networks by Leaving the Right Past Behind
Neural Information Processing Systems (NeurIPS), 2022
Ryutaro Tanno
Melanie F. Pradier
A. Nori
Yingzhen Li
KELM
338
37
0
11 Jul 2022
BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Shibo Hao
Bowen Tan
Kaiwen Tang
Bin Ni
Xiyan Shao
Hengzhe Zhang
Eric Xing
Zhiting Hu
336
35
0
28 Jun 2022
Memory-Based Model Editing at Scale
International Conference on Machine Learning (ICML), 2022
E. Mitchell
Charles Lin
Antoine Bosselut
Christopher D. Manning
Chelsea Finn
KELM
372
465
0
13 Jun 2022
Post-hoc Concept Bottleneck Models
International Conference on Learning Representations (ICLR), 2022
Mert Yuksekgonul
Maggie Wang
James Zou
452
256
0
31 May 2022
Language Anisotropic Cross-Lingual Model Editing
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Yang Xu
Yutai Hou
Wanxiang Che
Min Zhang
KELM
209
31
0
25 May 2022
Entity Cloze By Date: What LMs Know About Unseen Entities
Yasumasa Onoe
Michael J.Q. Zhang
Eunsol Choi
Greg Durrett
KELM
266
67
0
05 May 2022
On Continual Model Refinement in Out-of-Distribution Data Streams
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Bill Yuchen Lin
Sida I. Wang
Xi Lin
Robin Jia
Lin Xiao
Xiang Ren
Anuj Kumar
CLL
216
31
0
04 May 2022
Meta Learning for Natural Language Processing: A Survey
North American Chapter of the Association for Computational Linguistics (NAACL), 2022
Hung-yi Lee
Shang-Wen Li
Ngoc Thang Vu
349
52
0
03 May 2022
Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Bhavana Dalvi
Oyvind Tafjord
Peter Clark
LRM, KELM, ReLM
289
47
0
27 Apr 2022
Plug-and-Play Adaptation for Continuously-updated QA
Findings, 2022
Kyungjae Lee
Wookje Han
Seung-won Hwang
Hwaran Lee
Joonsuk Park
Sang-Woo Lee
KELM
211
23
0
27 Apr 2022
VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance
European Conference on Computer Vision (ECCV), 2022
Katherine Crowson
Stella Biderman
Daniel Kornis
Dashiell Stander
Eric Hallahan
Louis Castricato
Edward Raff
CLIP
493
444
0
18 Apr 2022
Fast Few-shot Debugging for NLU Test Suites
Workshop on Knowledge Extraction and Integration for Deep Learning Architectures; Deep Learning Inside Out (DeeLIO), 2022
Christopher Malon
Kai Li
E. Kruus
112
5
0
13 Apr 2022
A Review on Language Models as Knowledge Bases
Badr AlKhamissi
Millicent Li
Asli Celikyilmaz
Mona T. Diab
Marjan Ghazvininejad
KELM
315
210
0
12 Apr 2022
Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Kurt Shuster
M. Komeili
Leonard Adolphs
Stephen Roller
Arthur Szlam
Jason Weston
KELM
232
143
0
24 Mar 2022
Retrieval Augmented Classification for Long-Tail Visual Recognition
Computer Vision and Pattern Recognition (CVPR), 2022
Alex Long
Wei Yin
Thalaiyasingam Ajanthan
Vu-Linh Nguyen
Pulak Purkait
Ravi Garg
Alan Blair
Chunhua Shen
Anton Van Den Hengel
169
118
0
22 Feb 2022
Locating and Editing Factual Associations in GPT
Neural Information Processing Systems (NeurIPS), 2022
Kevin Meng
David Bau
A. Andonian
Yonatan Belinkov
KELM
1.0K
1,972
0
10 Feb 2022
Memory-assisted prompt editing to improve GPT-3 after deployment
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Aman Madaan
Niket Tandon
Peter Clark
Yiming Yang
KELM
455
0
0
16 Jan 2022
Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge
Ian Porada
Alessandro Sordoni
Jackie C.K. Cheung
175
9
0
16 Dec 2021
Editing a classifier by rewriting its prediction rules
Shibani Santurkar
Dimitris Tsipras
Mahalaxmi Elango
David Bau
Antonio Torralba
Aleksander Madry
KELM
390
98
0
02 Dec 2021
Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs
Peter Hase
Mona T. Diab
Asli Celikyilmaz
Xian Li
Zornitsa Kozareva
Veselin Stoyanov
Joey Tianyi Zhou
Srini Iyer
KELM, LRM
184
89
0
26 Nov 2021
Fast Model Editing at Scale
International Conference on Learning Representations (ICLR), 2021
E. Mitchell
Charles Lin
Antoine Bosselut
Chelsea Finn
Christopher D. Manning
KELM
1.0K
469
0
21 Oct 2021
Language Models As or For Knowledge Bases
Simon Razniewski
Andrew Yates
Nora Kassner
Gerhard Weikum
KELM
186
1
0
10 Oct 2021
Towards Continual Knowledge Learning of Language Models
Joel Jang
Seonghyeon Ye
Sohee Yang
Joongbo Shin
Janghoon Han
Gyeonghun Kim
Stanley Jungkyu Choi
Minjoon Seo
CLL, KELM
612
186
0
07 Oct 2021
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts
Zhengyan Zhang
Yankai Lin
Zhiyuan Liu
Peng Li
Maosong Sun
Jie Zhou
MoE
429
164
0
05 Oct 2021
Time-Aware Language Models as Temporal Knowledge Bases
Transactions of the Association for Computational Linguistics (TACL), 2021
Bhuwan Dhingra
Jeremy R. Cole
Julian Martin Eisenschlos
D. Gillick
Jacob Eisenstein
William W. Cohen
KELM
479
332
0
29 Jun 2021
Mind the Gap: Assessing Temporal Generalization in Neural Language Models
Neural Information Processing Systems (NeurIPS), 2021
Angeliki Lazaridou
A. Kuncoro
E. Gribovskaya
Devang Agrawal
Adam Liska
...
Sebastian Ruder
Dani Yogatama
Kris Cao
Susannah Young
Phil Blunsom
VLM
446
251
0
03 Feb 2021
Page 9 of 9