
Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
arXiv:2205.08187 · 17 May 2022
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron

Papers citing "Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility"

4 papers shown:
1. Deep Kernel Posterior Learning under Infinite Variance Prior Weights
   Jorge Loría, A. Bhadra (BDL, UQCV) · 02 Oct 2024
2. Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions
   Alberto Bordino, Stefano Favaro, S. Fortini · 08 Apr 2023
3. Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning
   François Caron, Fadhel Ayed, Paul Jung, Hoileong Lee, Juho Lee, Hongseok Yang · 02 Feb 2023
4. Why bigger is not always better: on finite and infinite neural networks
   Laurence Aitchison · 17 Oct 2019