FaithUn: Toward Faithful Forgetting in Language Models by Investigating the Interconnectedness of Knowledge

26 February 2025
Nakyeong Yang
Minsung Kim
Seunghyun Yoon
Joongbo Shin
Kyomin Jung
Topics: KELM, MU
Abstract

Various studies have attempted to remove sensitive or private knowledge from a language model to prevent its unauthorized exposure. However, prior studies have overlooked the complex and interconnected nature of knowledge, in which related pieces of knowledge must be examined together. Specifically, they have failed to evaluate whether an unlearning method faithfully erases interconnected knowledge that should be removed while retaining knowledge that appears relevant but exists in a completely different context. To resolve this problem, we first define a new concept called superficial unlearning, which refers to the phenomenon where an unlearning method either fails to erase the interconnected knowledge it should remove or unintentionally erases irrelevant knowledge. Based on this definition, we introduce a new benchmark, FaithUn, to analyze and evaluate the faithfulness of unlearning in real-world knowledge QA settings. Furthermore, we propose a novel unlearning method, KLUE, which updates only knowledge-related neurons to achieve faithful unlearning. KLUE identifies knowledge neurons using an explainability method and updates only those neurons using selected unforgotten samples. Experimental results demonstrate that widely used unlearning methods fail to ensure faithful unlearning, while our method shows significant effectiveness in real-world QA unlearning.
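
The abstract describes KLUE only at a high level: attribute knowledge-related neurons with an explainability method, then restrict the unlearning update to those neurons. The following is a minimal, hypothetical sketch of that general recipe, not the paper's implementation. It scores the hidden units of a toy MLP block by gradient-times-activation attribution and applies a gradient step on a placeholder forget objective only to the selected units; the class and function names (ToyMLPBlock, select_knowledge_neurons, unlearn_step), the attribution rule, and the toy objective are all illustrative assumptions.

import torch
import torch.nn as nn


class ToyMLPBlock(nn.Module):
    """Toy stand-in for a transformer MLP block; caches hidden activations for attribution."""

    def __init__(self, d_model=32, d_hidden=128):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.act = nn.GELU()
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        self.hidden = self.act(self.up(x))  # cached so we can attribute per-neuron importance
        return self.down(self.hidden)


def select_knowledge_neurons(block, x, loss_fn, top_k=8):
    """Score each hidden neuron by |gradient x activation| and return the top-k indices."""
    out = block(x)
    loss = loss_fn(out)
    grads = torch.autograd.grad(loss, block.hidden)[0]   # dLoss / d(activation)
    scores = (grads * block.hidden).abs().mean(dim=0)    # one attribution score per hidden neuron
    return scores.topk(top_k).indices


def unlearn_step(block, neuron_idx, forget_x, forget_loss_fn, lr=1e-2):
    """Apply a gradient step on the forget objective, touching only the selected neurons."""
    loss = forget_loss_fn(block(forget_x))
    block.zero_grad()
    loss.backward()
    mask = torch.zeros(block.up.out_features, dtype=torch.bool)
    mask[neuron_idx] = True
    with torch.no_grad():
        # Only the weight rows/columns tied to the selected hidden neurons are updated;
        # every other parameter is left unchanged.
        block.up.weight[mask] -= lr * block.up.weight.grad[mask]
        block.up.bias[mask] -= lr * block.up.bias.grad[mask]
        block.down.weight[:, mask] -= lr * block.down.weight.grad[:, mask]


if __name__ == "__main__":
    torch.manual_seed(0)
    block = ToyMLPBlock()
    x = torch.randn(4, 32)                       # pretend these encode a fact to forget
    forget_loss = lambda out: out.pow(2).mean()  # placeholder for a real unlearning objective
    neurons = select_knowledge_neurons(block, x, forget_loss)
    unlearn_step(block, neurons, x, forget_loss)
    print("updated hidden neurons:", neurons.tolist())

In a real setting the attributions would come from the QA examples targeted for forgetting, and the paper's use of selected unforgotten samples would further constrain the update; neither is modeled in this sketch.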

@article{yang2025_2502.19207,
  title={FaithUn: Toward Faithful Forgetting in Language Models by Investigating the Interconnectedness of Knowledge},
  author={Nakyeong Yang and Minsung Kim and Seunghyun Yoon and Joongbo Shin and Kyomin Jung},
  journal={arXiv preprint arXiv:2502.19207},
  year={2025}
}