The Unlearning Sensitive Content from Large Language Models task (SemEval-2025 Task 4) aims to remove targeted datapoints from trained models while minimally affecting their general knowledge. In our work, we leverage parameter-efficient, gradient-based unlearning using low-rank adaptation (LoRA) and layer-focused fine-tuning. To further enhance unlearning effectiveness, we employ data chunking: the forget data is split into disjoint partitions, each of which is merged with cyclically sampled retain samples at a pre-defined ratio. Our task-agnostic method achieves an outstanding forget-retain balance, ranking first on leaderboards and significantly outperforming baselines and competing systems. A minimal sketch of the data-chunking step follows.
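The sketch below illustrates the data-chunking idea under stated assumptions: a simple list-of-dicts dataset format, an illustrative chunk count, and a hypothetical forget:retain mixing ratio. Function and parameter names are placeholders, not the authors' exact implementation.

```python
# Illustrative sketch: split the forget set into disjoint partitions and
# interleave cyclically sampled retain examples at a fixed forget:retain ratio.
# Names, chunk count, and ratio are assumptions for illustration only.
from itertools import cycle
from typing import Dict, List


def chunk_forget_with_retain(
    forget_data: List[Dict],
    retain_data: List[Dict],
    num_chunks: int = 4,          # assumption: number of disjoint forget partitions
    retain_per_forget: int = 1,   # assumption: retain samples merged per forget sample
) -> List[List[Dict]]:
    """Return training chunks, each mixing forget samples with cycled retain samples."""
    chunk_size = (len(forget_data) + num_chunks - 1) // num_chunks
    retain_cycle = cycle(retain_data)  # cyclic sampling so retain data is reused as needed
    chunks = []
    for start in range(0, len(forget_data), chunk_size):
        forget_chunk = forget_data[start:start + chunk_size]  # disjoint forget partition
        mixed = []
        for sample in forget_chunk:
            mixed.append({**sample, "split": "forget"})
            for _ in range(retain_per_forget):
                mixed.append({**next(retain_cycle), "split": "retain"})
        chunks.append(mixed)
    return chunks


if __name__ == "__main__":
    forget = [{"text": f"forget-{i}"} for i in range(10)]
    retain = [{"text": f"retain-{i}"} for i in range(3)]
    for i, chunk in enumerate(chunk_forget_with_retain(forget, retain, num_chunks=2)):
        print(f"chunk {i}: {[s['text'] for s in chunk]}")
```

Each chunk would then serve as one unlearning round, e.g. fine-tuning LoRA adapters on selected layers with the usual opposing objectives (maximize loss on forget samples, minimize it on retain samples); the exact losses and layer selection are not specified here.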
@article{premptis2025_2503.02443,
  title   = {AILS-NTUA at SemEval-2025 Task 4: Parameter-Efficient Unlearning for Large Language Models using Data Chunking},
  author  = {Iraklis Premptis and Maria Lymperaiou and Giorgos Filandrianos and Orfeas Menis Mastromichalakis and Athanasios Voulodimos and Giorgos Stamou},
  journal = {arXiv preprint arXiv:2503.02443},
  year    = {2025}
}