Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions

arXiv:2304.04008
8 April 2023
Alberto Bordino, Stefano Favaro, S. Fortini

Papers citing "Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions"

Showing 6 of 6 papers.
Proportional infinite-width infinite-depth limit for deep linear neural networks
Federico Bassetti, Lucia Ladelli, P. Rotondo
22 Nov 2024

Wide stable neural networks: Sample regularity, functional convergence and Bayesian inverse problems
Tomás Soto
04 Jul 2024

Gaussian random field approximation via Stein's method with applications to wide random neural networks
Krishnakumar Balasubramanian, L. Goldstein, Nathan Ross, Adil Salim
28 Jun 2023

Posterior Inference on Shallow Infinitely Wide Bayesian Neural Networks under Weights with Unbounded Variance
Jorge Loría, A. Bhadra
18 May 2023

Large-width asymptotics for ReLU neural networks with α-Stable initializations
Stefano Favaro, S. Fortini, Stefano Peluchetti
16 Jun 2022

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
17 May 2022