arXiv:2209.13394
Magnitude and Angle Dynamics in Training Single ReLU Neurons

27 September 2022
Sangmin Lee
Byeongsu Sim
Jong Chul Ye
    MLT
Abstract

To understand the learning dynamics of deep ReLU networks, we investigate the dynamical system of gradient flow w(t) by decomposing it into magnitude ∥w(t)∥ and angle ϕ(t) := π − θ(t) components. In particular, for multi-layer single ReLU neurons with a spherically symmetric data distribution and the square loss function, we provide upper and lower bounds on the magnitude and angle components that describe the dynamics of gradient flow. Using the obtained bounds, we conclude that small-scale initialization induces slow convergence for deep single ReLU neurons. Finally, by exploiting the relation between gradient flow and gradient descent, we extend our results to the gradient descent setting. All theoretical results are verified by experiments.
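The magnitude–angle decomposition in the abstract can be illustrated with a minimal numerical sketch. The code below is an assumed setup, not the authors' experiment: a single ReLU neuron trained by gradient descent on the square loss, with Gaussian (spherically symmetric) inputs, a hypothetical unit teacher vector w_star, and a small-scale initialization. After training, the student weight w is decomposed into its magnitude ∥w∥ and the angle ϕ = π − θ, where θ is the angle between w and the teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 10, 2000
# Spherically symmetric inputs and a single teacher ReLU neuron (assumed setup).
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[0] = 1.0  # hypothetical unit-norm teacher direction

relu = lambda z: np.maximum(z, 0.0)
y = relu(X @ w_star)

def loss(w):
    # Square loss on the ReLU neuron's predictions.
    return 0.5 * np.mean((relu(X @ w) - y) ** 2)

# Small-scale initialization (the regime the paper argues converges slowly).
w = 0.01 * rng.standard_normal(d)
loss_init = loss(w)

eta = 0.05
for _ in range(500):
    pred = relu(X @ w)
    # Gradient of the square loss through the ReLU's active set.
    grad = X.T @ ((pred - y) * (X @ w > 0)) / n
    w -= eta * grad
loss_final = loss(w)

# Decompose the trained weight into magnitude and angle components.
magnitude = np.linalg.norm(w)
cos_theta = w @ w_star / (magnitude * np.linalg.norm(w_star))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
phi = np.pi - theta
```

Tracking `magnitude` and `phi` over iterations, rather than only at the end, is how one would observe the slow early-phase growth from a small initialization that the paper's bounds predict.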
