
Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime

International Conference on Machine Learning (ICML), 2023
Abstract

We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O\big(\varepsilon^{-1} \log^{1.5} d\big)$, where $d$ is the number of experts. This significantly improves over the best existing regret bounds for the DP non-realizable setting, which are $O\big(\varepsilon^{-1} \min\{d, T^{1/3} \log d\}\big)$. We also develop an adaptive algorithm for the small-loss setting with regret $O\big(L^\star \log d + \varepsilon^{-1} \log^{1.5} d\big)$, where $L^\star$ is the total loss of the best expert. Additionally, we consider DP online convex optimization in the realizable setting and propose an algorithm with near-optimal regret $O\big(\varepsilon^{-1} d^{1.5}\big)$, as well as an algorithm for the smooth case with regret $O\big(\varepsilon^{-2/3} (dT)^{1/3}\big)$, both significantly improving over existing bounds in the non-realizable regime.
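To make the realizable experts setting concrete, the following is a minimal, hypothetical Python sketch of one natural approach: maintain a set of surviving experts and eliminate those whose Laplace-perturbed cumulative loss crosses a threshold. The function name, the threshold choice, and the noise scale are all illustrative assumptions; this is not the paper's algorithm, and no formal privacy or regret guarantee is claimed for it.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_elimination_experts(loss_stream, d, eps):
    # Toy sketch: keep a set of "alive" experts, predict with a uniformly
    # random alive expert, and drop experts whose Laplace-perturbed
    # cumulative loss crosses a threshold. Illustrative only; the
    # threshold and noise scale are heuristic, and no privacy accounting
    # is carried out here.
    T = len(loss_stream)
    # Slack so the zero-loss expert survives the noise with high probability.
    threshold = 4.0 * np.log(d * T + 1) / eps
    alive = np.ones(d, dtype=bool)
    cum_loss = np.zeros(d)
    total_loss = 0.0
    for losses in loss_stream:                 # each `losses` lies in [0, 1]^d
        i = rng.choice(np.flatnonzero(alive))  # predict with a random alive expert
        total_loss += losses[i]
        cum_loss += losses
        noisy = cum_loss + rng.laplace(scale=1.0 / eps, size=d)
        survivors = alive & (noisy < threshold)
        if survivors.any():                    # never eliminate every expert
            alive = survivors
    return total_loss                          # equals regret when the best expert has zero loss

# Realizable stream: expert 0 never errs, the others err at random.
d, T = 32, 2000
stream = [np.concatenate(([0.0], rng.integers(0, 2, size=d - 1).astype(float)))
          for _ in range(T)]
print(noisy_elimination_experts(stream, d=d, eps=1.0))
```

In the realizable regime the best expert incurs zero loss, so the learner's total loss is exactly its regret; the sketch exploits this by permanently discarding experts once their (noisy) loss is clearly nonzero.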
