DP-RAFT: A Differentially Private Recipe for Accelerated Fine-Tuning

International Conference on Machine Learning (ICML), 2022
Main: 9 pages · Appendix: 19 pages · Bibliography: 6 pages · 20 figures · 21 tables
Abstract

A major direction in differentially private machine learning is differentially private fine-tuning: pretraining a model on a source of "public data" and transferring the extracted features to downstream tasks. This is an important setting because many industry deployments fine-tune publicly available feature extractors on proprietary data for downstream tasks. In this paper, we use features extracted from state-of-the-art open-source models to solve benchmark tasks in computer vision and natural language processing via differentially private fine-tuning. Our key insight is that by accelerating training, we can quickly drive the model parameters to regions of parameter space where the impact of noise is minimized. In doing so, we recover the same performance as non-private fine-tuning for realistic privacy budgets ε ∈ [0.01, 1.0] on benchmark image classification datasets, including CIFAR-100.
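The setting the abstract describes, privately fine-tuning a linear classifier ("linear probe") on features from a frozen pretrained extractor with DP-SGD, can be sketched as follows. This is a minimal illustration, not the paper's actual recipe: the function name, hyperparameters, and full-batch gradient computation are all assumptions chosen for clarity, with a large learning rate standing in for the "accelerated" training the paper advocates.

```python
import numpy as np

def dp_sgd_linear_probe(X, y, num_classes, epochs=10, lr=4.0,
                        clip_norm=1.0, noise_mult=1.0, seed=0):
    """Illustrative DP-SGD training of a linear probe on pre-extracted
    features X (n, d) with integer labels y. Hyperparameters are
    hypothetical, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((d, num_classes))
    for _ in range(epochs):
        # Softmax cross-entropy gradient w.r.t. logits, per example.
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(n), y] -= 1.0          # dL/dlogits = p - onehot(y)
        # Per-example weight gradients g_i = x_i outer (p_i - e_{y_i}).
        per_ex = X[:, :, None] * probs[:, None, :]          # (n, d, C)
        # Clip each example's gradient to L2 norm <= clip_norm.
        norms = np.linalg.norm(per_ex.reshape(n, -1), axis=1)
        scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        g = (per_ex * scale[:, None, None]).sum(axis=0)
        # Add Gaussian noise calibrated to the clipping norm.
        g += noise_mult * clip_norm * rng.standard_normal(g.shape)
        W -= lr * g / n
    return W
```

Because the feature extractor is frozen, only the small linear head sees gradient noise, which is one reason this transfer setting tolerates strict privacy budgets so well.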
