Layered Unlearning for Adversarial Relearning

Our goal is to understand how post-training methods, such as fine-tuning, alignment, and unlearning, modify language model behavior and representations. We are particularly interested in the brittleness of these modifications, which makes them easy to bypass through prompt engineering or relearning. Recent results suggest that post-training induces shallow, context-dependent ``circuits'' that suppress specific response patterns. This could be one explanation for the brittleness of post-training. To test this hypothesis, we design an unlearning algorithm, Layered Unlearning (LU), that creates distinct inhibitory mechanisms for a growing subset of the data. By unlearning the first $i$ folds while retaining the remaining $k - i$ at the $i$th of $k$ stages, LU limits the ability of relearning on a subset of the data to recover the full dataset. We evaluate LU through a combination of synthetic and large language model (LLM) experiments. We find that LU improves robustness to adversarial relearning for several different unlearning methods. Our results contribute to the state-of-the-art of machine unlearning and provide insight into the effect of post-training updates.
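To make the staged fold structure concrete, below is a minimal sketch of the loop the abstract describes: at stage $i$, the forget set is the union of the first $i$ folds and the retain set is the remaining $k - i$ folds. The helper names (`unlearn_step`, `retain_step`) and the training schedule are illustrative assumptions, not the authors' implementation; any gradient-based unlearning method could be plugged in.

```python
def layered_unlearning(model, folds, unlearn_step, retain_step, num_epochs=1):
    """Illustrative sketch of Layered Unlearning (LU).

    `folds` is a list of k disjoint subsets of the forget data. At stage i,
    the first i folds are unlearned while the remaining k - i folds are
    retained, so each stage builds a separate inhibitory mechanism on top
    of the previous ones. `unlearn_step` and `retain_step` are hypothetical
    callables standing in for whatever base unlearning method is used.
    """
    k = len(folds)
    for i in range(1, k + 1):
        forget_set = [x for fold in folds[:i] for x in fold]   # first i folds
        retain_set = [x for fold in folds[i:] for x in fold]   # remaining k - i folds
        for _ in range(num_epochs):
            model = unlearn_step(model, forget_set)            # suppress forget data
            if retain_set:
                model = retain_step(model, retain_set)         # preserve behavior elsewhere
    return model
```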
@article{qian2025_2505.09500,
  title   = {Layered Unlearning for Adversarial Relearning},
  author  = {Timothy Qian and Vinith Suriyakumar and Ashia Wilson and Dylan Hadfield-Menell},
  journal = {arXiv preprint arXiv:2505.09500},
  year    = {2025}
}