Characterizing the implicit bias via a primal-dual analysis
This paper shows that the implicit bias of gradient descent on linearly separable data is exactly characterized by the optimal solution of a dual optimization problem given by a smoothed margin, even for general losses. This is in contrast to prior results, which are often tailored to exponentially-tailed losses. For the exponential loss specifically, with $n$ training examples and $t$ gradient descent steps, our dual analysis further allows us to prove an $O(\ln(n)/\ln(t))$ convergence rate to the $\ell_2$ maximum margin direction, when a constant step size is used. This rate is tight in both $n$ and $t$, which has not been presented by prior work. On the other hand, with a properly chosen but aggressive step size schedule, we prove an $O(1/t)$ convergence rate for margin maximization, while prior work has only proved an $O(1/\sqrt{t})$ rate, or an $O(1/t)$ convergence rate to a suboptimal margin. Our key observations include that gradient descent on the primal variable naturally induces a mirror descent update on the dual variable, and that the dual objective in this setting is smooth enough to give a faster rate.
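The primal-dual correspondence described above can be illustrated numerically. The sketch below (not from the paper; the dataset and step size are illustrative choices) runs plain gradient descent on the exponential loss over a toy separable dataset, and tracks the induced dual variable: the normalized vector of per-example losses, i.e., a softmax of the negative margins, on which each primal step acts as a multiplicative-weights (entropic mirror descent) update.

```python
import numpy as np

# Toy linearly separable data; labels are folded into the rows of Z,
# so the per-example margins are simply Z @ w.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
Z = y[:, None] * X

w = np.zeros(2)   # primal variable
eta = 0.1         # constant step size (illustrative)
for t in range(20000):
    losses = np.exp(-Z @ w)        # exponential loss per example
    q = losses / losses.sum()      # dual variable: softmax of negative margins
    w += eta * Z.T @ losses        # gradient descent step on the primal

# Margin of the normalized gradient descent direction; for separable data
# this approaches the maximum margin as t grows.
margin = (Z @ (w / np.linalg.norm(w))).min()
print(round(margin, 3))
```

Note that the dual variable `q` is recomputed from the primal iterate at every step; the point of the paper's analysis is that this implicitly defined sequence follows a mirror descent trajectory on the smoothed-margin dual objective.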