Precise characterization of the prior predictive distribution of deep ReLU networks

Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann. 11 June 2021. BDL, UQCV.

Papers citing "Precise characterization of the prior predictive distribution of deep ReLU networks"

24 papers

• Don't be lazy: CompleteP enables compute-efficient deep transformers
  Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Bill Li, Blake Bordelon, Shane Bergsma, C. Pehlevan, Boris Hanin, Joel Hestness. 02 May 2025.
• Understanding and Minimising Outlier Features in Neural Network Training
  Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann. 29 May 2024.
• Bayesian RG Flow in Neural Network Field Theories
  Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton. 27 May 2024.
• Differential Equation Scaling Limits of Shaped and Unshaped Neural Networks
  Mufan Bill Li, Mihai Nica. 18 Oct 2023.
• Commutative Width and Depth Scaling in Deep Neural Networks
  Soufiane Hayou. 02 Oct 2023.
• On the Disconnect Between Theory and Practice of Neural Networks: Limits of the NTK Perspective
  Jonathan Wenger, Felix Dangel, Agustinus Kristiadi. 29 Sep 2023.
• Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit
  Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, C. Pehlevan. 28 Sep 2023.
• A Primer on Bayesian Neural Networks: Review and Debates
  Federico Danieli, Konstantinos Pitas, M. Vladimirova, Vincent Fortuin. 28 Sep 2023. BDL, AAML.
• Quantitative CLTs in Deep Neural Networks
  Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati. 12 Jul 2023. BDL.
• The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit
  Lorenzo Noci, Chuning Li, Mufan Bill Li, Bobby He, Thomas Hofmann, Chris J. Maddison, Daniel M. Roy. 30 Jun 2023.
• Structures of Neural Network Effective Theories
  Çağın Ararat, Tianji Cai, Cem Tekin, Zhengkang Zhang. 03 May 2023.
• Width and Depth Limits Commute in Residual Networks
  Soufiane Hayou, Greg Yang. 01 Feb 2023.
• Deterministic equivalent and error universality of deep random features learning
  Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro. 01 Feb 2023. MLT.
• Bayesian Interpolation with Deep Linear Networks
  Boris Hanin, Alexander Zlokapa. 29 Dec 2022.
• An Empirical Analysis of the Advantages of Finite- v.s. Infinite-Width Bayesian Neural Networks
  Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez. 16 Nov 2022.
• Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse
  Lorenzo Noci, Sotiris Anagnostidis, Luca Biggio, Antonio Orvieto, Sidak Pal Singh, Aurélien Lucchi. 07 Jun 2022.
• The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization
  Mufan Bill Li, Mihai Nica, Daniel M. Roy. 06 Jun 2022.
• Gaussian Pre-Activations in Neural Networks: Myth or Reality?
  Pierre Wolinski, Julyan Arbel. 24 May 2022. AI4CE.
• Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
  Shaun Li. 03 Jan 2022. AI4CE.
• Dependence between Bayesian neural network units
  M. Vladimirova, Julyan Arbel, Stéphane Girard. 29 Nov 2021. BDL.
• Bayesian neural network unit priors and generalized Weibull-tail property
  M. Vladimirova, Julyan Arbel, Stéphane Girard. 06 Oct 2021. BDL.
• Random Neural Networks in the Infinite Width Limit as Gaussian Processes
  Boris Hanin. 04 Jul 2021. BDL.
• A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs
  Gadi Naveh, Z. Ringel. 08 Jun 2021. SSL, MLT.
• Why bigger is not always better: on finite and infinite neural networks
  Laurence Aitchison. 17 Oct 2019.