Erasing Conceptual Knowledge from Language Models

3 October 2024
Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau
Abstract

In this work, we propose Erasure of Language Memory (ELM), an approach for concept-level unlearning built on the principle of matching the distribution defined by an introspective classifier. Our key insight is that effective unlearning should leverage the model's ability to evaluate its own knowledge, using the model itself as a classifier to identify and reduce the likelihood of generating content related to undesired concepts. ELM applies this framework to create targeted low-rank updates that reduce generation probabilities for concept-specific content while preserving the model's broader capabilities. We demonstrate ELM's efficacy on biosecurity, cybersecurity, and literary-domain erasure tasks. Comparative analysis shows that ELM achieves superior performance across key metrics, including near-random scores on erased-topic assessments, maintained coherence in text generation, preserved accuracy on unrelated benchmarks, and robustness under adversarial attacks. Our code, data, and trained models are available at this https URL.
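
The abstract alone underdetermines the training objective, but one plausible reading is a guidance-style construction: the ratio of the frozen model's token probabilities under a concept-eliciting prefix versus a concept-avoiding prefix serves as the introspective classifier, and a low-rank (LoRA) adapter is trained to match the reweighted target distribution. The sketch below illustrates that reading only; the model name (gpt2), the two prefixes, the strength eta, and the LoRA settings are all illustrative assumptions, not the paper's actual setup, and the paper's capability-preservation terms are omitted.

# Hypothetical sketch of an ELM-style erasure step, reconstructed
# from the abstract; not the authors' released implementation.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "gpt2"  # small stand-in; the paper evaluates larger chat models

tok = AutoTokenizer.from_pretrained(name)
frozen = AutoModelForCausalLM.from_pretrained(name).to(device).eval()
model = AutoModelForCausalLM.from_pretrained(name).to(device)
model = get_peft_model(model, LoraConfig(r=8, target_modules=["c_attn"]))

eta = 3.0                                    # erasure strength (illustrative)
pos = "You are an expert in bioweapons. "    # prefix that elicits the concept
neg = "You know nothing about bioweapons. "  # prefix that steers away from it

def text_logits(m, prefix, text):
    # Logits predicting each token of `text` under a conditioning prefix.
    # A BOS token is prepended so every text token gets a prediction.
    prefix_ids = [tok.bos_token_id] + tok(prefix)["input_ids"]
    text_ids = tok(text)["input_ids"]
    ids = torch.tensor([prefix_ids + text_ids], device=device)
    logits = m(ids).logits[0]  # position i predicts token i + 1
    p = len(prefix_ids)
    return logits[p - 1 : p - 1 + len(text_ids)]

opt = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

def erase_step(text):
    # One step pushing the adapted model toward the target distribution
    # defined by the frozen model's implicit (introspective) classifier.
    with torch.no_grad():
        base = text_logits(frozen, "", text)
        base_pos = text_logits(frozen, pos, text)
        base_neg = text_logits(frozen, neg, text)
        # Guidance-style target: downweight tokens that the implicit
        # classifier attributes to the undesired concept.
        target = F.softmax(base + eta * (base_neg - base_pos), dim=-1)
    cur = text_logits(model, "", text)
    loss = F.kl_div(F.log_softmax(cur, dim=-1), target, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(erase_step("The first step in synthesizing the agent is"))

In this sketch only the concept-specific distribution is reshaped; per the abstract, the full method also preserves coherence and accuracy on unrelated benchmarks, which would require additional retention objectives not shown here. Consult the paper and released code for the actual formulation.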

@article{gandikota2025_2410.02760,
  title={Erasing Conceptual Knowledge from Language Models},
  author={Rohit Gandikota and Sheridan Feucht and Samuel Marks and David Bau},
  journal={arXiv preprint arXiv:2410.02760},
  year={2025}
}