Towards a regularity theory for ReLU networks -- chain rule and global error estimates
arXiv:1905.04992

International Conference on Sampling Theory and Applications (SampTA), 2019
13 May 2019
Julius Berner
Dennis Elbrächter
Philipp Grohs
Arnulf Jentzen

Papers citing "Towards a regularity theory for ReLU networks -- chain rule and global error estimates"

4 / 4 papers shown
Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient
Neural Information Processing Systems (NeurIPS), 2024
Vu C. Dinh
L. Ho
Cuong V Nguyen
29 Oct 2024
Upper and lower bounds for the Lipschitz constant of random neural networks
Paul Geuchen
Dominik Stöger
Felix Voigtlaender
02 Nov 2023
Robust SDE-Based Variational Formulations for Solving Linear PDEs via Deep Learning
International Conference on Machine Learning (ICML), 2022
Lorenz Richter
Julius Berner
21 Jun 2022
How degenerate is the parametrization of neural networks with the ReLU activation function?
Neural Information Processing Systems (NeurIPS), 2019
Julius Berner
Dennis Elbrächter
Philipp Grohs
23 May 2019