
On the Performance of Differentially Private Optimization with Heavy-Tail Class Imbalance

Qiaoyue Tang
Alain Zhiyanov
Mathias Lécuyer
Main: 6 pages · 11 figures · Bibliography: 2 pages · Appendix: 8 pages
Abstract

In this work, we analyze the optimization behaviour of common private learning optimization algorithms under heavy-tailed class-imbalanced distributions. We show that, in a stylized model, optimizing with Gradient Descent with differential privacy (DP-GD) suffers when learning low-frequency classes, whereas optimization algorithms that estimate second-order information do not. In particular, DP-AdamBC, which removes the DP bias from the estimated loss curvature, is a crucial component for avoiding the ill-conditioning caused by heavy-tail class imbalance, and empirically fits the data better, with ≈8% and ≈5% increases in training accuracy when learning the least frequent classes in controlled experiments and on real data, respectively.
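To make the contrast concrete, below is a minimal NumPy sketch of one DP-GD step next to a DP-AdamBC-style step. This is an illustrative reconstruction under standard DP-SGD noise calibration, not the paper's implementation; all function and parameter names are assumptions. The key difference is that the Adam-style update subtracts the known variance of the injected DP noise from the second-moment estimate before preconditioning, so small true gradients (e.g., from low-frequency classes) are not drowned out by noise-inflated curvature estimates.

```python
import numpy as np

def privatize_gradient(per_example_grads, clip_norm, noise_mult, rng):
    """Clip per-example gradients, average, and add Gaussian noise.

    Sketch of the standard Gaussian-mechanism gradient release used by
    both DP-GD and DP-Adam variants; shapes are illustrative.
    """
    B = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads.reshape(B, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale[:, None]
    # Noise std follows the usual calibration: noise_mult * clip_norm / B.
    noise = rng.normal(0.0, noise_mult * clip_norm / B,
                       size=per_example_grads.shape[1:])
    return clipped.mean(axis=0) + noise

def dp_gd_step(params, noisy_grad, lr):
    """Plain DP-GD: a first-order step on the privatized gradient."""
    return params - lr * noisy_grad

def dp_adambc_step(params, state, noisy_grad, lr, beta1, beta2,
                   noise_mult, clip_norm, batch_size, gamma=1e-8):
    """DP-AdamBC-style step (sketch): Adam on the noisy gradient, with the
    expected per-coordinate DP noise variance subtracted from the
    second-moment estimate before the square root."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * noisy_grad
    v = beta2 * v + (1 - beta2) * noisy_grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Bias correction: E[noise^2] = (noise_mult * clip_norm / batch_size)^2
    # per coordinate; gamma floors the estimate to keep the step finite.
    noise_var = (noise_mult * clip_norm / batch_size) ** 2
    v_corrected = np.maximum(v_hat - noise_var, gamma)
    params = params - lr * m_hat / np.sqrt(v_corrected)
    return params, (m, v, t)
```

Without the correction, the preconditioner divides every coordinate by roughly the noise scale, which flattens the relative step sizes that second-order information is supposed to provide; subtracting `noise_var` restores larger effective steps on the rarely updated, low-frequency-class coordinates.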
