Fast Margin Maximization via Dual Acceleration

1 July 2021
Ziwei Ji, Nathan Srebro, Matus Telgarsky
arXiv: 2107.00595
Abstract

We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of $\widetilde{\mathcal{O}}(1/t^2)$. This contrasts with a rate of $\mathcal{O}(1/\log(t))$ for standard gradient descent, and $\mathcal{O}(1/t)$ for normalized gradient descent. This momentum-based method is derived via the convex dual of the maximum-margin problem, and specifically by applying Nesterov acceleration to this dual, which manages to result in a simple and intuitive method in the primal. This dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.
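
The abstract does not spell out the method itself, so the following is only a minimal, hypothetical Python sketch of the general idea: Nesterov-style momentum combined with a loss-normalized gradient step for the logistic loss on separable data. The momentum weight (t-1)/(t+2), the step schedule eta*t, and all function names here are illustrative assumptions, not the schedule or code from the paper.

import numpy as np

def logistic_loss_grad(w, X, y):
    # Mean logistic loss and its gradient for labels y in {-1, +1}.
    margins = y * (X @ w)
    losses = np.logaddexp(0.0, -margins)          # log(1 + exp(-margin)), computed stably
    probs = np.exp(-np.logaddexp(0.0, margins))   # sigmoid(-margin), computed stably
    grad = -(X * (probs * y)[:, None]).mean(axis=0)
    return losses.mean(), grad

def accelerated_margin_descent(X, y, steps=500, eta=1.0):
    # Nesterov-style momentum with the gradient normalized by the current loss,
    # so step sizes stay meaningful as the loss decays exponentially fast.
    # This is an illustrative sketch, not the paper's algorithm or step sizes.
    w = np.zeros(X.shape[1])
    w_prev = w.copy()
    for t in range(1, steps + 1):
        beta = (t - 1) / (t + 2)                  # common Nesterov momentum weight (assumption)
        v = w + beta * (w - w_prev)               # look-ahead (extrapolated) point
        loss, grad = logistic_loss_grad(v, X, y)
        if loss < 1e-300:                         # loss has numerically vanished; stop early
            break
        w_prev = w
        w = v - eta * t * grad / loss             # loss-normalized step; schedule is illustrative
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X @ np.array([2.0, -1.0]))        # linearly separable toy labels
    w = accelerated_margin_descent(X, y)
    print("normalized margin:", np.min(y * (X @ w)) / np.linalg.norm(w))

The loss normalization mirrors the normalized-gradient-descent baseline mentioned in the abstract; the momentum term is what the dual acceleration adds on top of it.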
