The Geometry of Neural Nets' Parameter Spaces Under Reparametrization
Agustinus Kristiadi, Felix Dangel, Philipp Hennig
14 February 2023 · arXiv:2302.07384

Papers citing "The Geometry of Neural Nets' Parameter Spaces Under Reparametrization" (11 of 11 papers shown)

  • Improving Learning to Optimize Using Parameter Symmetries
    Guy Zamir, Aryan Dokania, B. Zhao, Rose Yu
    21 Apr 2025

  • Fast Deep Hedging with Second-Order Optimization
    Konrad Mueller, Amira Akkari, Lukas Gonon, Ben Wood
    ODL · 29 Oct 2024

  • Reparameterization invariance in approximate Bayesian inference
    Hrittik Roy, M. Miani, Carl Henrik Ek, Philipp Hennig, Marvin Pfortner, Lukas Tatzel, Søren Hauberg
    BDL · 05 Jun 2024

  • Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI
    Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, ..., David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
    UQCV · BDL · 01 Feb 2024

  • Riemannian Laplace Approximation with the Fisher Metric
    Hanlin Yu, Marcelo Hartmann, Bernardo Williams, M. Girolami, Arto Klami
    05 Nov 2023

  • On the Disconnect Between Theory and Practice of Neural Networks: Limits of the NTK Perspective
    Jonathan Wenger, Felix Dangel, Agustinus Kristiadi
    29 Sep 2023

  • Riemannian Laplace approximations for Bayesian neural networks
    Federico Bergamin, Pablo Moreno-Muñoz, Søren Hauberg, Georgios Arvanitidis
    BDL · 12 Jun 2023

  • Understanding Gradient Descent on Edge of Stability in Deep Learning
    Sanjeev Arora, Zhiyuan Li, A. Panigrahi
    MLT · 19 May 2022

  • What Happens after SGD Reaches Zero Loss? -- A Mathematical Framework
    Zhiyuan Li, Tianhao Wang, Sanjeev Arora
    MLT · 13 Oct 2021

  • Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics
    D. Kunin, Javier Sagastuy-Breña, Surya Ganguli, Daniel L. K. Yamins, Hidenori Tanaka
    08 Dec 2020

  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
    ODL · 15 Sep 2016