
CLEAR: Character Unlearning in Textual and Visual Modalities

Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Main: 9 pages, Appendix: 10 pages, Bibliography: 3 pages; 7 figures, 7 tables
Abstract

Machine Unlearning (MU) is critical for removing private or hazardous information from deep learning models. While MU has advanced significantly in unimodal (text or vision) settings, multimodal unlearning (MMU) remains underexplored due to the lack of open benchmarks for evaluating cross-modal data removal. To address this gap, we introduce CLEAR, the first open-source benchmark designed specifically for MMU. CLEAR contains 200 fictitious individuals and 3,700 images linked with corresponding question-answer pairs, enabling thorough evaluation across modalities. We conduct a comprehensive analysis of 11 MU methods (e.g., SCRUB, gradient ascent, DPO) across four evaluation sets, demonstrating that jointly unlearning both modalities outperforms single-modality approaches. The dataset is available at this https URL
