CoME: An Unlearning-based Approach to Conflict-free Model Editing

20 February 2025
Dahyun Jung, Jaehyung Seo, Jaewook Lee, Chanjun Park, Heuiseok Lim

Topics: MU, KELM
Abstract

Large language models (LLMs) often retain outdated or incorrect information from pre-training, which undermines their reliability. While model editing methods have been developed to address such errors without full re-training, they frequently suffer from knowledge conflicts, where outdated information interferes with new knowledge. In this work, we propose Conflict-free Model Editing (CoME), a novel framework that enhances the accuracy of knowledge updates in LLMs by selectively removing outdated knowledge. CoME leverages unlearning to mitigate knowledge interference, allowing new information to be integrated without compromising relevant linguistic features. Through experiments on GPT-J and LLaMA-3 using the Counterfact and ZsRE datasets, we demonstrate that CoME improves both editing accuracy and model reliability when applied to existing editing methods. Our results highlight that the targeted removal of outdated knowledge is crucial for enhancing model editing effectiveness and maintaining the model's generative performance.
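
To make the core idea concrete, here is a minimal sketch of conflict-aware editing: descend on a loss that teaches the new fact while ascending on the loss of the conflicting outdated one, so the two updates do not pull the parameters against each other. This is an illustrative simplification under our own assumptions, not the authors' CoME procedure; the gpt2 stand-in model, the example facts, and the unlearn_weight hyperparameter are all hypothetical.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small stand-in for the GPT-J / LLaMA-3 models used in the paper.
    model_name = "gpt2"
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.train()

    def lm_loss(text):
        # Next-token cross-entropy over the whole sequence.
        ids = tok(text, return_tensors="pt").input_ids
        return model(ids, labels=ids).loss

    # Hypothetical Counterfact-style edit pair (illustrative, not from the paper).
    old_fact = "The Eiffel Tower is located in Paris."
    new_fact = "The Eiffel Tower is located in Rome."

    unlearn_weight = 0.5  # assumed trade-off between forgetting and learning
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

    for step in range(20):
        optimizer.zero_grad()
        # Learn the new fact (gradient descent) while unlearning the outdated
        # one (gradient ascent via the negative term); in practice the ascent
        # term would be clipped or regularized to preserve fluency.
        loss = lm_loss(new_fact) - unlearn_weight * lm_loss(old_fact)
        loss.backward()
        optimizer.step()

Per the abstract, CoME is applied on top of existing editing methods rather than as whole-model fine-tuning; the sketch above only conveys the descend-on-new, ascend-on-old intuition behind removing conflicting knowledge before integrating the update.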

View on arXiv: https://arxiv.org/abs/2502.15826
@article{jung2025_2502.15826,
  title={CoME: An Unlearning-based Approach to Conflict-free Model Editing},
  author={Dahyun Jung and Jaehyung Seo and Jaewook Lee and Chanjun Park and Heuiseok Lim},
  journal={arXiv preprint arXiv:2502.15826},
  year={2025}
}