ResearchTrend.AI

Gradient Descent Quantizes ReLU Network Features
arXiv: 1803.08367
22 March 2018
Hartmut Maennel, Olivier Bousquet, Sylvain Gelly
Topic: MLT
Links: ArXiv (abs) · PDF · HTML

Papers citing "Gradient Descent Quantizes ReLU Network Features"

5 / 55 papers shown
  • Gradient Dynamics of Shallow Univariate ReLU Networks
    Francis Williams, Matthew Trager, Claudio Silva, Daniele Panozzo, Denis Zorin, Joan Bruna
    18 Jun 2019 · 70 · 80 · 0

  • Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
    Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant
    19 Apr 2019 · 167 · 147 · 0

  • Implicit Regularization in Over-parameterized Neural Networks
    M. Kubo, Ryotaro Banno, Hidetaka Manabe, Masataka Minoji
    05 Mar 2019 · 76 · 23 · 0

  • Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
    Xialiang Dou, Tengyuan Liang
    Topics: MLT
    21 Jan 2019 · 83 · 42 · 0

  • Randomized Prior Functions for Deep Reinforcement Learning
    Ian Osband, John Aslanides, Albin Cassirer
    Topics: UQ, CV, BDL
    08 Jun 2018 · 87 · 380 · 0