ResearchTrend.AI

Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method (arXiv:2411.04388)

7 November 2024
Teodora Baluta
Pascal Lamblin
Daniel Tarlow
Fabian Pedregosa
Gintare Karolina Dziugaite

Papers citing "Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method"

ReLearn: Unlearning via Learning for Large Language Models
Haoming Xu
Ningyuan Zhao
Liming Yang
Sendong Zhao
Shumin Deng
Mengru Wang
Bryan Hooi
Nay Oo
H. Chen
N. Zhang
16 Feb 2025