
Agnostic Learnability of Halfspaces via Logistic Loss

Abstract

We investigate approximation guarantees provided by logistic regression for the fundamental problem of agnostic learning of homogeneous halfspaces. Previously, for a certain broad class of "well-behaved" distributions on the examples, Diakonikolas et al. (2020) proved an $\tilde{\Omega}(\textrm{OPT})$ lower bound, while Frei et al. (2021) proved an $\tilde{O}(\sqrt{\textrm{OPT}})$ upper bound, where $\textrm{OPT}$ denotes the best zero-one/misclassification risk of a homogeneous halfspace. In this paper, we close this gap by constructing a well-behaved distribution such that the global minimizer of the logistic risk over this distribution only achieves $\Omega(\sqrt{\textrm{OPT}})$ misclassification risk, matching the upper bound in (Frei et al., 2021). On the other hand, we also show that if we impose a radial-Lipschitzness condition in addition to well-behavedness on the distribution, logistic regression on a ball of bounded radius reaches $\tilde{O}(\textrm{OPT})$ misclassification risk. Our techniques also show that for any well-behaved distribution, regardless of radial Lipschitzness, we can overcome the $\Omega(\sqrt{\textrm{OPT}})$ lower bound for the logistic loss, attaining $\tilde{O}(\textrm{OPT})$ misclassification risk, simply at the cost of one additional convex optimization step involving the hinge loss. This two-step convex optimization algorithm is simpler than previous methods obtaining this guarantee, all of which require solving $O(\log(1/\textrm{OPT}))$ minimization problems.
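The sketch below is only an illustration of the two-step convex procedure described above: first logistic regression over a bounded-radius ball, then a single convex optimization involving the hinge loss. The exact coupling between the two steps, the choice of radius R, and the helper names (two_step_halfspace, logistic_risk, hinge_risk) are assumptions made for illustration and are not taken from the paper; here the hinge step is simply warm-started at the logistic minimizer.

```python
# Hypothetical sketch of the two-step convex procedure from the abstract:
# (1) logistic regression on a norm ball, (2) one hinge-loss minimization.
# The coupling between steps and the radius R are illustrative guesses.
import numpy as np
from scipy.optimize import minimize

def logistic_risk(w, X, y):
    """Empirical logistic risk: mean of log(1 + exp(-y <w, x>))."""
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def hinge_risk(w, X, y):
    """Empirical hinge risk: mean of max(0, 1 - y <w, x>)."""
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

def two_step_halfspace(X, y, R=10.0):
    """Return a unit-norm halfspace direction via a logistic step and a hinge step."""
    d = X.shape[1]
    ball = {"type": "ineq", "fun": lambda w: R * R - w @ w}  # enforces ||w|| <= R

    # Step 1: logistic regression on a ball of bounded radius.
    res1 = minimize(logistic_risk, np.zeros(d), args=(X, y),
                    method="SLSQP", constraints=[ball])

    # Step 2: one additional convex optimization step with the hinge loss,
    # warm-started at the logistic minimizer (hypothetical coupling).
    res2 = minimize(hinge_risk, res1.x, args=(X, y),
                    method="SLSQP", constraints=[ball])

    w = res2.x
    return w / max(np.linalg.norm(w), 1e-12)

# Toy usage: a noisy homogeneous halfspace in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
w_star = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = np.sign(X @ w_star)
flip = rng.random(500) < 0.05  # flip 5% of labels to simulate agnostic noise
y[flip] = -y[flip]
w_hat = two_step_halfspace(X, y)
print("empirical zero-one risk:", np.mean(np.sign(X @ w_hat) != y))
```

Both subproblems are convex, so any off-the-shelf constrained convex solver could replace SLSQP here; the point of the sketch is only that the second step is a single extra convex minimization rather than a sequence of $O(\log(1/\textrm{OPT}))$ problems.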
