arXiv: 1912.00827
A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning
2 December 2019
Ben Adlam
J. Levinson
Jeffrey Pennington
Papers citing "A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning" (7 of 7 papers shown)
- "A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks" — Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban (11 Oct 2023)
- "Demystifying Disagreement-on-the-Line in High Dimensions" — Dong-Hwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani (31 Jan 2023)
- "A Solvable Model of Neural Scaling Laws" — A. Maloney, Daniel A. Roberts, J. Sully (30 Oct 2022)
- "Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model" — A. Bodin, N. Macris (22 Oct 2021)
- "Analysis of One-Hidden-Layer Neural Networks via the Resolvent Method" — Vanessa Piccolo, Dominik Schröder (11 May 2021)
- "Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime" — Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (2 Mar 2020)
- "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" — Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean (26 Sep 2016)