
Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing

Abstract

The locate-then-edit paradigm has shown significant promise for knowledge editing (KE) in Large Language Models (LLMs). While previous methods perform well on single-hop factual recall tasks, they consistently struggle with multi-hop factual recall tasks involving newly edited knowledge. In this paper, leveraging tools from mechanistic interpretability, we first show that in multi-hop tasks LLMs tend to retrieve knowledge carrying implicit subject information from deeper MLP layers, unlike single-hop tasks, which rely on shallower layers. This distinction explains the poor performance of current methods on multi-hop queries: they primarily edit shallow layers with single-hop edit prompts, leaving deeper layers unchanged. To address this, we propose IFMET, a novel locate-then-edit KE approach designed to edit both shallow and deep MLP layers. Beyond single-hop editing prompts, IFMET further incorporates multi-hop editing prompts to locate and modify knowledge across different stages of reasoning. Experimental results demonstrate that IFMET significantly improves performance on multi-hop factual recall tasks, overcoming the limitations of previous locate-then-edit methods.
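
To make the core idea concrete, the toy sketch below (a rough illustration, not the authors' IFMET implementation) applies a rank-one, locate-then-edit-style weight update to both a shallow and a deep MLP matrix: the shallow edit is keyed on a representation from a single-hop edit prompt, the deep edit on one from a multi-hop edit prompt. The function name, dimensions, key/value vectors, and the update rule itself are all illustrative assumptions.

# Conceptual sketch only: rank-one edits to a shallow and a deep MLP weight,
# keyed on single-hop and multi-hop edit prompts respectively (hypothetical).
import numpy as np

def rank_one_edit(W, k, v_target):
    """Nudge W so that W @ k ~= v_target via a rank-one correction.

    W: (d_out, d_in) toy stand-in for an MLP down-projection weight.
    k: (d_in,) key vector extracted at the located layer for the edit prompt.
    v_target: (d_out,) value vector encoding the new fact.
    """
    residual = v_target - W @ k                # what the layer currently gets wrong
    update = np.outer(residual, k) / (k @ k)   # rank-one correction along k
    return W + update

rng = np.random.default_rng(0)
d_in, d_out = 64, 64

# Toy weights standing in for a shallow and a deep MLP layer.
W_shallow = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
W_deep = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)

# Hypothetical keys: one from a single-hop edit prompt (subject stated explicitly),
# one from a multi-hop edit prompt (subject only implicit in the reasoning chain).
k_single_hop = rng.standard_normal(d_in)
k_multi_hop = rng.standard_normal(d_in)
v_new_fact = rng.standard_normal(d_out)        # representation of the edited fact

# Edit the shallow layer with the single-hop key (as prior methods do) ...
W_shallow = rank_one_edit(W_shallow, k_single_hop, v_new_fact)
# ... and additionally edit the deep layer with the multi-hop key, the extra
# step the paper argues is needed for multi-hop factual recall.
W_deep = rank_one_edit(W_deep, k_multi_hop, v_new_fact)

print(np.allclose(W_shallow @ k_single_hop, v_new_fact))  # True
print(np.allclose(W_deep @ k_multi_hop, v_new_fact))      # True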

@article{zhang2025_2410.06331,
  title={Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing},
  author={Zhuoran Zhang and Yongxiang Li and Zijian Kan and Keyuan Cheng and Lijie Hu and Di Wang},
  journal={arXiv preprint arXiv:2410.06331},
  year={2025}
}