Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning

Main:13 Pages
3 Figures
Bibliography:3 Pages
2 Tables
Appendix:22 Pages
Abstract
This paper studies the problem of differentially private empirical risk minimization (DP-ERM) for binary linear classification. We obtain an efficient -DP algorithm with an empirical zero-one risk bound of where is the number of data points, is an arbitrary subset of data one can remove and is the margin of linear separation of the remaining data points (after is removed). Here, hides only logarithmic terms. In the agnostic case, we improve the existing results when the number of outliers is small. Our algorithm is highly adaptive because it does not require knowing the margin parameter or outlier subset . We also derive a utility bound for the advanced private hyperparameter tuning algorithm.
View on arXiv@article{wang2025_2505.24737, title={ Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning }, author={ Erchi Wang and Yuqing Zhu and Yu-Xiang Wang }, journal={arXiv preprint arXiv:2505.24737}, year={ 2025 } }
Comments on this paper