Learning to Unlearn while Retaining: Combating Gradient Conflicts in Machine Unlearning
Gaurav Patel, Qiang Qiu
8 March 2025 · arXiv:2503.06339

Papers citing "Learning to Unlearn while Retaining: Combating Gradient Conflicts in Machine Unlearning"

4 citing papers shown.
Distill, Forget, Repeat: A Framework for Continual Unlearning in Text-to-Image Diffusion Models
Naveen George, Naoki Murata, Yuhta Takida, Konda Reddy Mopuri, Yuki Mitsufuji
02 Dec 2025
Unleashing Uncertainty: Efficient Machine Unlearning for Generative AI
Christoforos N. Spartalis, T. Semertzidis, P. Daras, Efstratios Gavves
28 Aug 2025
Coeff-Tuning: A Graph Filter Subspace View for Tuning Attention-Based Large Models
Computer Vision and Pattern Recognition (CVPR), 2025
Zichen Miao, Wei Chen, Qiang Qiu
24 Mar 2025
Federated Unlearning with Gradient Descent and Conflict Mitigation
Zibin Pan, Zhichao Wang, Chi Li, Kaiyan Zheng, Boqi Wang, Xiaoying Tang, Junhua Zhao
31 Dec 2024