
PULSE: Practical Evaluation Scenarios for Large Multimodal Model Unlearning

Tatsuki Kawakami
Kazuki Egashira
Atsuyuki Miyai
Go Irie
Kiyoharu Aizawa
Main: 7 pages
5 figures
Bibliography: 1 page
3 tables
Abstract

In recent years, unlearning techniques, which induce a model to "forget" previously learned information, have attracted attention as a way to address privacy and copyright concerns in large language models (LLMs) and large multimodal models (LMMs). While several unlearning benchmarks have been established for LLMs, a practical evaluation framework for unlearning in LMMs has been less explored. Specifically, the existing unlearning benchmark for LMMs considers only scenarios in which the model is required to unlearn fine-tuned knowledge through a single unlearning operation. In this study, we introduce the PULSE protocol for realistic unlearning scenarios in LMMs, built on two critical perspectives: (i) Pre-trained knowledge Unlearning, for analyzing the effect of unlearning across different knowledge acquisition phases, and (ii) Long-term Sustainability Evaluation, to address sequential unlearning requests. We then evaluate existing unlearning methods along these dimensions. Our results reveal that, although some techniques can successfully unlearn knowledge acquired through fine-tuning, they struggle to eliminate information learned during pre-training. Moreover, methods that effectively unlearn a batch of target data in a single operation exhibit substantial performance degradation when the same data are split and unlearned sequentially.
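To make the second perspective concrete, below is a minimal Python sketch of a sequential-unlearning evaluation loop in the spirit of the Long-term Sustainability Evaluation: the forget set is split into chunks that are unlearned one request at a time, with forget and retain metrics logged after each request. All names here (unlearn, eval_forget, eval_retain) are hypothetical placeholders for illustration, not the paper's actual interface.

from typing import Callable, List, Sequence

def sequential_unlearning_eval(
    model,
    forget_set: Sequence,
    num_splits: int,
    unlearn: Callable,      # hypothetical: applies one unlearning request, returns updated model
    eval_forget: Callable,  # hypothetical: accuracy on forget data (should drop)
    eval_retain: Callable,  # hypothetical: accuracy on retained data (should stay high)
) -> List[dict]:
    """Unlearn the forget set in num_splits sequential requests,
    logging forget/retain metrics after each request."""
    chunk = max(1, len(forget_set) // num_splits)
    splits = [forget_set[i:i + chunk] for i in range(0, len(forget_set), chunk)]
    history = []
    for step, split in enumerate(splits, start=1):
        model = unlearn(model, split)
        history.append({
            "step": step,
            "forget_acc": eval_forget(model),
            "retain_acc": eval_retain(model),
        })
    return history

Under this sketch, the single-operation setting studied in prior work corresponds to num_splits=1, so comparing the metric trajectories across different num_splits values exposes the degradation the paper reports for sequential requests.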

@article{kawakami2025_2507.01271,
  title={PULSE: Practical Evaluation Scenarios for Large Multimodal Model Unlearning},
  author={Tatsuki Kawakami and Kazuki Egashira and Atsuyuki Miyai and Go Irie and Kiyoharu Aizawa},
  journal={arXiv preprint arXiv:2507.01271},
  year={2025}
}