A Fixed-Point of View on Gradient Methods for Big Data

29 June 2017
A. Jung
arXiv: 1706.09880
Abstract

Using their interpretation as fixed-point iterations, we review first-order gradient methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, first-order gradient methods are widely used in machine learning methods involving massive datasets. In particular, stochastic first-order methods are considered the de facto standard for training deep neural networks. Studying these methods within fixed-point theory provides us with powerful tools for analyzing the convergence properties of a wide range of gradient methods. In particular, first-order methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied using well-known results on inexact fixed-point iterations. Moreover, as illustrated in this paper, the fixed-point picture allows an elegant derivation of accelerations for basic gradient methods. In particular, we show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function.
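
To make the fixed-point picture concrete, the following sketch (an illustration, not code from the paper) treats gradient descent with step size α as the fixed-point iteration x_{k+1} = T(x_k) with T(x) = x − α∇f(x), and models an inexact gradient, as in stochastic gradient descent, by perturbing each step with noise. The least-squares objective, step-size choice, and noise level below are illustrative assumptions.

import numpy as np

# Illustrative sketch only: gradient descent on a smooth convex objective f
# viewed as repeated application of the operator T(x) = x - alpha * grad_f(x),
# whose fixed points are exactly the minimizers of f.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))   # assumed least-squares data (not from the paper)
b = rng.standard_normal(50)

def grad_f(x):
    # Gradient of f(x) = 0.5 * ||Ax - b||^2.
    return A.T @ (A @ x - b)

L = np.linalg.norm(A.T @ A, 2)      # Lipschitz constant of grad_f (spectral norm)
alpha = 1.0 / L                     # standard step size for which T is nonexpansive

def T(x):
    # Fixed-point operator associated with one gradient-descent step.
    return x - alpha * grad_f(x)

x = np.zeros(10)
for _ in range(500):
    x = T(x)                        # exact fixed-point iteration x_{k+1} = T(x_k)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to minimizer (exact gradients):", np.linalg.norm(x - x_star))

# Inexact fixed-point iteration: a noisy gradient (as in SGD) amounts to applying
# a perturbed operator, so convergence can be analyzed with results on inexact
# fixed-point iterations.
x_noisy = np.zeros(10)
for _ in range(500):
    noise = 0.01 * rng.standard_normal(10)          # assumed noise level
    x_noisy = x_noisy - alpha * (grad_f(x_noisy) + noise)
print("distance to minimizer (noisy gradients):", np.linalg.norm(x_noisy - x_star))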
