
Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning

Main: 13 pages, 3 figures, 2 tables; Bibliography: 3 pages; Appendix: 22 pages
Abstract

This paper studies the problem of differentially private empirical risk minimization (DP-ERM) for binary linear classification. We obtain an efficient $(\varepsilon,\delta)$-DP algorithm with an empirical zero-one risk bound of $\tilde{O}\left(\frac{1}{\gamma^2\varepsilon n} + \frac{|S_{\mathrm{out}}|}{\gamma n}\right)$, where $n$ is the number of data points, $S_{\mathrm{out}}$ is an arbitrary subset of data that can be removed, and $\gamma$ is the margin of linear separation of the remaining data points (after $S_{\mathrm{out}}$ is removed). Here, $\tilde{O}(\cdot)$ hides only logarithmic terms. In the agnostic case, we improve on existing results when the number of outliers is small. Our algorithm is highly adaptive: it requires knowing neither the margin parameter $\gamma$ nor the outlier subset $S_{\mathrm{out}}$. We also derive a utility bound for the advanced private hyperparameter tuning algorithm.
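For context on the DP-ERM setting the abstract refers to, the sketch below shows a generic noisy (sub)gradient descent on the hinge loss for binary linear classification. This is not the paper's adaptive algorithm: the clipping threshold, step size, iteration count, and the loose composition-style noise calibration are all illustrative assumptions, and a real implementation would use a tighter privacy accountant.

```python
import numpy as np

def noisy_gd_hinge(X, y, epsilon, delta, T=100, lr=0.1, clip=1.0, seed=0):
    """Illustrative (epsilon, delta)-DP-style noisy gradient descent on the
    hinge loss. NOT the paper's adaptive algorithm; a generic DP-ERM baseline
    with a simplified, loose noise calibration (placeholder for a proper
    accountant such as RDP/moments accounting).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    # Per-iteration Gaussian noise scale, calibrated in a rough
    # advanced-composition style over T full-batch steps (illustrative).
    sigma = clip * np.sqrt(2 * T * np.log(1.25 / delta)) / (epsilon * n)
    for _ in range(T):
        margins = y * (X @ w)
        active = margins < 1.0                      # examples violating the margin
        grads = -(y[active, None] * X[active])      # per-example hinge subgradients
        if len(grads):
            # Clip each per-example gradient to L2 norm `clip`.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip)
            g = grads.sum(axis=0) / n
        else:
            g = np.zeros(d)
        # Noisy gradient step.
        w -= lr * (g + sigma * rng.standard_normal(d))
    return w

# Example usage on synthetic data (hypothetical parameters):
# X, y = np.random.randn(1000, 20), np.sign(np.random.randn(1000))
# w = noisy_gd_hinge(X, y, epsilon=1.0, delta=1e-5)
```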

@article{wang2025_2505.24737,
  title={Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning},
  author={Erchi Wang and Yuqing Zhu and Yu-Xiang Wang},
  journal={arXiv preprint arXiv:2505.24737},
  year={2025}
}