
Massive Editing for Large Language Models Based on Dynamic Weight Generation

Wentao Wan
Qiqing Lao
Zhiwei Xie
Hefeng Wu
Runnan Lin
Liang Lin
Keze Wang
Main: 10 pages · Bibliography: 4 pages · Appendix: 13 pages · 10 figures · 25 tables
Abstract

Knowledge Editing (KE) studies how to modify knowledge stored in Large Language Models (LLMs) at low cost compared to pre-training. Performing large-scale edits on LLMs while preserving the Reliability, Generality, and Locality of the edits remains a challenge. This paper proposes a Massive editing approach for LLMs based on dynamic weight Generation (MeG). MeG attaches a dynamic weight neuron to specific layers of the LLM and uses a diffusion model to generate the weights of this neuron conditioned on the input query associated with the target knowledge, so that adding a single dynamic weight neuron suffices for large-scale knowledge editing. Experiments show that MeG significantly outperforms existing knowledge editing methods on large-scale KE in terms of Reliability, Generality, and Locality, with a particularly large absolute gain (in percentage points) on the Locality metric, demonstrating the advantages of the proposed method.
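To make the core mechanism concrete, the sketch below illustrates one plausible reading of the abstract: a single extra neuron attached to a layer's hidden states, with its weights produced per query by a conditional generator. All names here (WeightGenerator, DynamicNeuron, hidden_dim, query_dim) are illustrative assumptions rather than the paper's implementation, and the paper's conditional diffusion model is replaced by a plain MLP for brevity.

```python
# Minimal sketch, assuming a PyTorch-style LLM layer. NOT the paper's code:
# the conditional diffusion generator is stood in for by a simple MLP, and
# all module/parameter names are hypothetical.
import torch
import torch.nn as nn


class WeightGenerator(nn.Module):
    """Maps a query embedding to the weights of one neuron (a stand-in for
    the conditional diffusion model described in the abstract)."""

    def __init__(self, query_dim: int, hidden_dim: int):
        super().__init__()
        # Output: one weight vector (hidden_dim) plus one bias scalar.
        self.net = nn.Sequential(
            nn.Linear(query_dim, 4 * hidden_dim),
            nn.GELU(),
            nn.Linear(4 * hidden_dim, hidden_dim + 1),
        )

    def forward(self, query_emb: torch.Tensor):
        out = self.net(query_emb)            # (batch, hidden_dim + 1)
        return out[:, :-1], out[:, -1:]      # per-query weights, bias


class DynamicNeuron(nn.Module):
    """A single extra neuron whose weights are generated per query and whose
    activation is projected back and added to the layer's hidden states."""

    def __init__(self, query_dim: int, hidden_dim: int):
        super().__init__()
        self.generator = WeightGenerator(query_dim, hidden_dim)
        # Projects the scalar neuron activation back into the hidden space.
        self.up_proj = nn.Linear(1, hidden_dim, bias=False)

    def forward(self, hidden: torch.Tensor, query_emb: torch.Tensor):
        w, b = self.generator(query_emb)                 # dynamic weights
        act = torch.einsum("btd,bd->bt", hidden, w) + b  # neuron activation
        return hidden + self.up_proj(act.unsqueeze(-1))  # residual update


# Toy usage: batch of 2 sequences of length 5, hidden size 16, query dim 32.
hidden = torch.randn(2, 5, 16)
query_emb = torch.randn(2, 32)
edited = DynamicNeuron(query_dim=32, hidden_dim=16)(hidden, query_emb)
print(edited.shape)  # torch.Size([2, 5, 16])
```

Because only the generator's parameters are trained while the base LLM stays frozen, a construction along these lines would leave unrelated inputs largely untouched, which is consistent with the strong Locality results the abstract reports.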
