From Activation to Initialization: Scaling Insights for Optimizing Neural Fields

28 March 2024 · arXiv:2403.19205
Hemanth Saratchandran, Sameera Ramasinghe, Simon Lucey

Papers citing "From Activation to Initialization: Scaling Insights for Optimizing Neural Fields"

3 papers shown

Fast Training of Sinusoidal Neural Fields via Scaling Initialization
Taesun Yeom, Sangyoon Lee, Jaeho Lee
07 Oct 2024

On the effectiveness of neural priors in modeling dynamical systems
Sameera Ramasinghe, Hemanth Saratchandran, Violetta Shevchenko, Simon Lucey
10 Mar 2023

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021