arXiv:1904.07462

Global Error Bounds and Linear Convergence for Gradient-Based Algorithms for Trend Filtering and ℓ₁-Convex Clustering

16 April 2019
Nhat Ho
Tianyi Lin
Michael I. Jordan
Abstract

We propose a class of first-order gradient-type optimization algorithms to solve structured filtering-clustering problems, a class of problems that includes trend filtering and ℓ₁-convex clustering as special cases. Our first main result establishes the linear convergence of deterministic gradient-type algorithms despite the extreme ill-conditioning of the difference operator matrices in these problems. This convergence result rests on a convex-concave saddle-point formulation of filtering-clustering problems and on the fact that the dual form of the problem admits a global error bound, which in turn follows from the celebrated Hoffman bound on the distance between a point and its projection onto an optimal set. The linear convergence rate also holds for stochastic variance-reduced gradient-type algorithms. Finally, we present empirical results showing that the algorithms we analyze perform comparably to state-of-the-art algorithms for trend filtering, while offering advantages in scalability.
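
As a concrete illustration of the dual gradient approach described above (a minimal sketch, not the authors' implementation), first-order ℓ₁ trend filtering, min_β ½‖y − β‖² + λ‖Dβ‖₁ with D a discrete difference operator, has the dual min_z ½‖y − Dᵀz‖² subject to ‖z‖_∞ ≤ λ, with the primal solution recovered as β = y − Dᵀz. The NumPy sketch below runs projected gradient descent on this dual; the step size, iteration count, and λ are illustrative choices.

import numpy as np

def difference_matrix(n, order=1):
    """k-th order discrete difference operator D (the ill-conditioned matrix)."""
    D = np.eye(n)
    for _ in range(order):
        D = D[1:] - D[:-1]  # each pass applies one first difference
    return D

def trend_filter_dual_pg(y, lam, order=1, n_iters=5000):
    """Projected gradient descent on the dual box-constrained least squares."""
    D = difference_matrix(len(y), order)
    eta = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L for the smooth dual objective
    z = np.zeros(D.shape[0])
    for _ in range(n_iters):
        grad = D @ (D.T @ z - y)            # gradient of 0.5*||y - D^T z||^2
        z = np.clip(z - eta * grad, -lam, lam)  # project onto ||z||_inf <= lam
    return y - D.T @ z                      # recover the primal solution

# Illustrative usage: denoise a noisy piecewise-constant signal
rng = np.random.default_rng(0)
y = np.repeat([0.0, 2.0, -1.0], 50) + 0.3 * rng.standard_normal(150)
beta_hat = trend_filter_dual_pg(y, lam=5.0)

Despite the ill-conditioning of D, the dual objective satisfies a global error bound of the kind analyzed in the paper, which is what licenses a linear rate for this projected gradient iteration rather than the generic sublinear one.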
