Characterizing the implicit bias via a primal-dual analysis

11 June 2019
Ziwei Ji
Matus Telgarsky
Abstract

This paper shows that the implicit bias of gradient descent on linearly separable data is exactly characterized by the optimal solution of a dual optimization problem given by a smoothed margin, even for general losses. This is in contrast to prior results, which are often tailored to exponentially-tailed losses. For the exponential loss specifically, with $n$ training examples and $t$ gradient descent steps, our dual analysis further allows us to prove an $O(\ln(n)/\ln(t))$ convergence rate to the $\ell_2$ maximum margin direction, when a constant step size is used. This rate is tight in both $n$ and $t$, which has not been presented by prior work. On the other hand, with a properly chosen but aggressive step size schedule, we prove $O(1/t)$ rates for both $\ell_2$ margin maximization and implicit bias, whereas prior work (including all first-order methods for the general hard-margin linear SVM problem) proved $\widetilde{O}(1/\sqrt{t})$ margin rates, or $O(1/t)$ margin rates to a suboptimal margin, with an implied (slower) bias rate. Our key observations include that gradient descent on the primal variable naturally induces a mirror descent update on the dual variable, and that the dual objective in this setting is smooth enough to give a faster rate.
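
The setting described in the abstract can be made concrete with a small numerical sketch. The snippet below is illustrative only and is not the authors' code: it runs constant-step gradient descent on the averaged exponential loss over a hypothetical separable dataset Z (labels folded into the examples, so z_i = y_i x_i), then inspects both the primal iterate, whose normalized direction should approach the $\ell_2$ maximum margin direction, and the induced dual weights, which concentrate on the minimum-margin examples. The data, step size eta, and iteration count are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linearly separable data with labels folded in (z_i = y_i * x_i):
    # every coordinate is positive, so w = (1, 1) already separates the data.
    n, d = 20, 2
    Z = rng.uniform(0.5, 3.0, size=(n, d))

    def grad(w):
        # Gradient of the averaged exponential loss (1/n) * sum_i exp(-z_i . w).
        q = np.exp(-Z @ w)        # unnormalized per-example weights
        return -(Z.T @ q) / n

    # Constant-step gradient descent, the regime of the O(ln(n)/ln(t)) rate.
    w = np.zeros(d)
    eta = 0.1
    for _ in range(5000):
        w -= eta * grad(w)

    # Primal view: the normalized iterate should align with the l2 max-margin direction.
    direction = w / np.linalg.norm(w)
    print("margin of normalized iterate:", np.min(Z @ direction))

    # Dual view: the normalized loss weights (a softmax over negative margins)
    # concentrate on the minimum-margin (support) examples as t grows.
    s = -Z @ w
    dual = np.exp(s - s.max())
    dual /= dual.sum()
    print("three largest dual weights:", np.round(np.sort(dual)[-3:], 3))

Sorting the dual weights makes the dual picture visible: only a handful of examples carry nearly all of the weight, which is the concentration on minimum-margin examples that the dual characterization describes.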
