Global Convergence and Rich Feature Learning in L-Layer Infinite-Width Neural Networks under μP Parametrization

12 March 2025
Zixiang Chen
Greg Yang
Qingyue Zhao
Quanquan Gu
    MLT
Abstract

Despite deep neural networks' powerful representation learning capabilities, theoretical understanding of how networks can simultaneously achieve meaningful feature learning and global convergence remains elusive. Existing approaches like the neural tangent kernel (NTK) are limited because features stay close to their initialization in this parametrization, leaving open questions about feature properties during substantial evolution. In this paper, we investigate the training dynamics of infinitely wide, L-layer neural networks using the tensor program (TP) framework. Specifically, we show that, when trained with stochastic gradient descent (SGD) under the Maximal Update parametrization (μP) and mild conditions on the activation function, SGD enables these networks to learn linearly independent features that substantially deviate from their initial values. This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum. Our analysis leverages both the interactions among features across layers and the properties of Gaussian random variables, providing new insights into deep representation learning. We further validate our theoretical findings through experiments on real-world datasets.
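
For readers unfamiliar with the Maximal Update parametrization, the sketch below shows one common way to write an L-layer MLP in μP, following the abc-parametrization convention used in the tensor program line of work: each effective weight is a width-dependent multiplier times a trainable matrix initialized with variance 1/width, and SGD uses a single width-independent learning rate. The exponents, the ReLU activation, and the PyTorch framing are illustrative assumptions, not the authors' experimental setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MuPMLP(nn.Module):
    """L-layer MLP in the abc-parametrization with muP-style exponents (a sketch).

    Effective weights are W^l = n^{-a_l} * w^l with trainable w^l ~ N(0, 1/n), where
    (assumed, following the usual tensor-program convention)
        a_1 = -1/2 (input), a_l = 0 (hidden), a_L = 1/2 (readout),
    and SGD uses a single width-independent learning rate (c = 0).
    """

    def __init__(self, d_in: int, width: int, depth: int, d_out: int = 1):
        super().__init__()
        self.width = width
        sizes = [d_in] + [width] * (depth - 1) + [d_out]
        self.weights = nn.ParameterList()
        self.multipliers = []
        for l, (fan_in, fan_out) in enumerate(zip(sizes[:-1], sizes[1:])):
            # Trainable matrix initialized with variance 1/width (b = 1/2).
            w = nn.Parameter(torch.randn(fan_out, fan_in) / width ** 0.5)
            self.weights.append(w)
            if l == 0:                      # input layer: multiplier sqrt(n)
                self.multipliers.append(width ** 0.5)
            elif l == len(sizes) - 2:       # readout: multiplier 1/sqrt(n)
                self.multipliers.append(width ** -0.5)
            else:                           # hidden layers: multiplier 1
                self.multipliers.append(1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for l, (w, m) in enumerate(zip(self.weights, self.multipliers)):
            h = m * F.linear(h, w)
            if l < len(self.weights) - 1:
                # ReLU stands in for any activation meeting the paper's conditions.
                h = torch.relu(h)
        return h


# Width-independent SGD learning rate on the raw parameters w^l (c = 0).
model = MuPMLP(d_in=10, width=1024, depth=4)
opt = torch.optim.SGD(model.parameters(), lr=0.5)
```

The readout multiplier of 1/sqrt(width) on top of the 1/width initialization variance gives effective output weights of variance 1/width², which is what lets features move by Θ(1) during training, in contrast to the NTK parametrization where features stay near their initialization.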

View on arXiv
@article{chen2025_2503.09565,
  title={ Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $\mu$P Parametrization },
  author={ Zixiang Chen and Greg Yang and Qingyue Zhao and Quanquan Gu },
  journal={arXiv preprint arXiv:2503.09565},
  year={ 2025 }
}