
To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models

International Conference on Machine Learning (ICML), 2024
6 May 2024
George-Octavian Barbulescu
Peter Triantafillou

Papers citing "To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models"

13 papers
Ascent Fails to Forget
Ioannis Mavrothalassitis, Pol Puigdemont, Noam Itzhak Levi, Volkan Cevher
30 Sep 2025
Erase or Hide? Suppressing Spurious Unlearning Neurons for Robust Unlearning
Nakyeong Yang, Dong-Kyum Kim, Jea Kwon, Minsung Kim, Kyomin Jung, M. Cha
26 Sep 2025
LoReUn: Data Itself Implicitly Provides Cues to Improve Machine Unlearning
Xiang Li, Qianli Shen, Haonan Wang, Kenji Kawaguchi
30 Jul 2025
Memorization Sinks: Isolating Memorization during LLM Training
Gaurav R. Ghosal, Pratyush Maini, Aditi Raghunathan
14 Jul 2025
Learning-Time Encoding Shapes Unlearning in LLMs
Ruihan Wu, Konstantin Garov, Kamalika Chaudhuri
18 Jun 2025
SoK: Machine Unlearning for Large Language Models
Jie Ren, Yue Xing, Yingqian Cui, Charu C. Aggarwal, Hui Liu
10 Jun 2025
Distillation Robustifies Unlearning
Bruce W. Lee, Addie Foote, Alex Infanger, Leni Shor, Harish Kamath, Jacob Goldman-Wetzler, Bryce Woodworth, Alex Cloud, Alexander Matt Turner
06 Jun 2025
Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning
Changsheng Wang, Yihua Zhang, Jinghan Jia, Parikshit Ram, Dennis L. Wei, Yuguang Yao, Soumyadeep Pal, Nathalie Baracaldo, Sijia Liu
02 Jun 2025
Not All Data Are Unlearned Equally
Aravind Krishnan, Siva Reddy, Marius Mosbach
07 Apr 2025
Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method
Teodora Baluta, Pascal Lamblin, Daniel Tarlow, Fabian Pedregosa, Gintare Karolina Dziugaite
07 Nov 2024
Mitigating Memorization In Language Models
Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Nathaniel Hudson, Caleb Geniesse, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney
03 Oct 2024
Position: LLM Unlearning Benchmarks are Weak Measures of Progress
Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith
03 Oct 2024
MU-Bench: A Multitask Multimodal Benchmark for Machine Unlearning
Jiali Cheng, Hadi Amiri
21 Jun 2024