Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective
15 March 2022
Gowthami Somepalli, Liam H. Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein
arXiv:2203.08124

Papers citing "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective"

18 / 18 papers shown

Towards Model Resistant to Transferable Adversarial Examples via Trigger Activation
Yi Yu, Song Xia, Xun Lin, Chenqi Kong, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex C. Kot
AAML, SILM
148 · 0 · 0 · 20 Apr 2025

The Curious Case of Arbitrariness in Machine Learning
Prakhar Ganesh, Afaf Taik, G. Farnadi
59 · 2 · 0 · 28 Jan 2025

Latent Space Translation via Inverse Relative Projection
Valentino Maiorca, Luca Moschella, Marco Fumero, Francesco Locatello, Emanuele Rodolà
34 · 1 · 0 · 21 Jun 2024

Deep neural networks architectures from the perspective of manifold learning
German Magai
AAML, AI4CE
24 · 6 · 0 · 06 Jun 2023

Similarity of Neural Network Models: A Survey of Functional and Representational Measures
Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich
52 · 64 · 0 · 10 May 2023

On the Variance of Neural Network Training with respect to Test Sets and Distributions
Keller Jordan
OOD
18 · 10 · 0 · 04 Apr 2023

SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries
Ahmed Imtiaz Humayun, Randall Balestriero, Guha Balakrishnan, Richard Baraniuk
24 · 17 · 0 · 24 Feb 2023

On the Lipschitz Constant of Deep Networks and Double Descent
Matteo Gamba, Hossein Azizpour, Marten Bjorkman
25 · 7 · 0 · 28 Jan 2023

Task Discovery: Finding the Tasks that Neural Networks Generalize on
Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir
OOD
40 · 10 · 0 · 01 Dec 2022

The Vanishing Decision Boundary Complexity and the Strong First Component
Hengshuai Yao
UQCV
30 · 0 · 0 · 25 Nov 2022

Towards Good Practices in Evaluating Transfer Adversarial Attacks
Zhengyu Zhao, Hanwei Zhang, Renjue Li, R. Sicre, Laurent Amsaleg, Michael Backes
AAML
24 · 20 · 0 · 17 Nov 2022

Deep Double Descent via Smooth Interpolation
Matteo Gamba, Erik Englesson, Marten Bjorkman, Hossein Azizpour
61 · 10 · 0 · 21 Sep 2022

Random initialisations performing above chance and how to find them
Frederik Benzing, Simon Schug, Robert Meier, J. Oswald, Yassir Akram, Nicolas Zucchet, Laurence Aitchison, Angelika Steger
ODL
35 · 24 · 0 · 15 Sep 2022

Effectiveness of Function Matching in Driving Scene Recognition
Shingo Yashima
23 · 1 · 0 · 20 Aug 2022

Linear Connectivity Reveals Generalization Strategies
Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra
237 · 45 · 0 · 24 May 2022

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
86 · 72 · 0 · 29 Sep 2021

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
271 · 2,603 · 0 · 04 May 2021

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
93 · 152 · 0 · 02 Mar 2020