Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths

16 May 2022
Tianxiang Gao, Hongyang Gao
MLT
ArXiv · PDF · HTML

Papers citing "Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths"

3 / 3 papers shown
Global Convergence Rate of Deep Equilibrium Models with General Activations
Lan V. Truong
11 Feb 2023

On the optimization and generalization of overparameterized implicit neural networks
Tianxiang Gao, Hongyang Gao
MLT, AI4CE
30 Sep 2022

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021