ResearchTrend.AI

How do infinite width bounded norm networks look in function space? (arXiv:1902.05040)
13 February 2019
Pedro H. P. Savarese, Itay Evron, Daniel Soudry, Nathan Srebro

Papers citing "How do infinite width bounded norm networks look in function space?"

16 papers shown
The Effects of Multi-Task Learning on ReLU Neural Network Functions
Julia B. Nakhleh, Joseph Shenouda, Robert D. Nowak
29 Oct 2024

When does compositional structure yield compositional generalization? A kernel theory
Samuel Lippl, Kim Stachenfeld
NAI · CoGe · 26 May 2024

Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities
Rahul Parhi, Michael Unser
05 Oct 2023

Penalising the biases in norm regularisation enforces sparsity
Etienne Boursier, Nicolas Flammarion
02 Mar 2023

Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate
M. Belkin, Daniel J. Hsu, P. Mitra
AI4CE · 13 Jun 2018

Implicit Bias of Gradient Descent on Linear Convolutional Networks
Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro
MDE · 01 Jun 2018

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Lénaïc Chizat, Francis R. Bach
OT · 24 May 2018

The Implicit Bias of Gradient Descent on Separable Data
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro
27 Oct 2017

On the ability of neural nets to express distributions
Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora
BDL · 22 Feb 2017

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam M. Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, J. Dean
MoE · 23 Jan 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
HAI · 10 Nov 2016

Wide & Deep Learning for Recommender Systems
Heng-Tze Cheng, L. Koc, Jeremiah Harmsen, T. Shaked, Tushar Chandra, ..., Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah
HAI · VLM · 24 Jun 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015

Breaking the Curse of Dimensionality with Convex Neural Networks
Francis R. Bach
30 Dec 2014

In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
AI4CE · 20 Dec 2014

The Lasso Problem and Uniqueness
Robert Tibshirani
01 Jun 2012