
Differential Privacy in Continual Learning: Which Labels to Update?

Main: 9 pages, Appendix: 23 pages, Bibliography: 7 pages; 13 figures, 7 tables
Abstract

The goal of continual learning (CL) is to retain knowledge across tasks, but this conflicts with the strict privacy required for sensitive training data, which prohibits storing or memorising individual samples. To address this, we combine CL with differential privacy (DP). We highlight that failing to account for privacy leakage through the set of labels a model can output can break the guarantees of otherwise valid DP algorithms. This is especially relevant in CL, where new classes arrive with new tasks. We show that mitigating the issue with a data-independent, overly large label space can have minimal negative impact on utility when fine-tuning a pre-trained model under DP, whereas learning the labels with a separate DP mechanism risks losing small classes.
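The leakage mechanism the abstract describes can be illustrated with a minimal sketch (not from the paper; the datasets, class names, and helper functions below are hypothetical): if the released model's output label set is computed from the training data, two neighbouring datasets that differ in a single rare-class sample yield visibly different label sets, so the label set alone reveals the presence of that sample. A data-independent (deliberately over-large) label space removes this channel.

```python
# Hypothetical illustration: a data-dependent label set leaks membership.
# Two neighbouring datasets differ by one sample of a rare class "disease_X".
dataset_a = ["healthy"] * 100 + ["disease_X"]  # contains the sensitive sample
dataset_b = ["healthy"] * 100                  # neighbouring dataset without it

def naive_label_space(labels):
    # Data-dependent: the deployed classifier head exposes exactly the
    # labels observed in training, which differ across neighbouring datasets.
    return sorted(set(labels))

def fixed_label_space(labels, universe=("healthy", "disease_X", "disease_Y")):
    # Data-independent, over-large label space: identical for any input
    # dataset, so the label set itself reveals nothing about the data.
    return sorted(universe)

# The naive label sets differ, so an observer learns whether the
# disease_X record was in the training data; the fixed ones do not.
print(naive_label_space(dataset_a))  # ['disease_X', 'healthy']
print(naive_label_space(dataset_b))  # ['healthy']
assert fixed_label_space(dataset_a) == fixed_label_space(dataset_b)
```

Note this sketch only shows why the label set must be independent of the data; the paper's DP training of the model weights is a separate concern.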
