Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, Richard Zemel, Roger C. Grosse
arXiv:2104.11044, 22 April 2021
Community tag: MoMe
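The paper's central object is the monotonic linear interpolation (MLI) property: the training loss, evaluated along the straight line in weight space from the initialization to the trained solution, often decreases monotonically. Below is a minimal sketch of that evaluation protocol in PyTorch, assuming a generic `model`, a mean-reduced `loss_fn`, a data `loader`, and `theta_0` / `theta_1` as state_dict snapshots of the same architecture; all of these names are illustrative assumptions, not the authors' code.

```python
import torch

def mli_curve(model, theta_0, theta_1, loss_fn, loader, steps=25):
    """Average loss along theta_alpha = (1 - alpha) * theta_0 + alpha * theta_1."""
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Blend the two state_dict snapshots; integer buffers (e.g. BatchNorm's
        # num_batches_tracked) cannot be interpolated, so keep the final value.
        blended = {
            k: torch.lerp(theta_0[k], theta_1[k], float(alpha))
            if theta_0[k].is_floating_point() else theta_1[k]
            for k in theta_0
        }
        model.load_state_dict(blended)
        model.eval()
        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in loader:
                # loss_fn is assumed mean-reduced, so re-weight by batch size.
                total += loss_fn(model(x), y).item() * y.size(0)
                count += y.size(0)
        losses.append(total / count)
    return losses  # MLI holds when this sequence decreases (near-)monotonically
```

Plotting the returned losses against alpha gives the kind of interpolation curve that the paper, and several of the citing works listed below, analyze.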
Papers citing "Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes" (22 of 22 papers shown)

| Title | Authors | Tags | Citations | Date |
| --- | --- | --- | --- | --- |
| In Search of the Successful Interpolation: On the Role of Sharpness in CLIP Generalization | Alireza Abdollahpoorrostam | | 0 | 21 Oct 2024 |
| The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof | Derek Lim, Moe Putterman, Robin Walters, Haggai Maron, Stefanie Jegelka | | 5 | 30 May 2024 |
| Visualizing, Rethinking, and Mining the Loss Landscape of Deep Neural Networks | Xin-Chun Li, Lan Li, De-Chuan Zhan | | 2 | 21 May 2024 |
| Merging by Matching Models in Task Parameter Subspaces | Derek Tam, Mohit Bansal, Colin Raffel | MoMe | 10 | 07 Dec 2023 |
| Proving Linear Mode Connectivity of Neural Networks via Optimal Transport | Damien Ferbach, Baptiste Goujaud, Gauthier Gidel, Aymeric Dieuleveut | MoMe | 16 | 29 Oct 2023 |
| No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths | Charles Guille-Escuret, Hiroki Naganuma, Kilian Fatras, Ioannis Mitliagkas | | 3 | 20 Jun 2023 |
| Edit at your own risk: evaluating the robustness of edited models to distribution shifts | Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, Henry Kvinge | KELM | 8 | 28 Feb 2023 |
| Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width | Dayal Singh Kalra, M. Barkeshli | | 9 | 23 Feb 2023 |
| Editing Models with Task Arithmetic | Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi | KELM, MoMe, MU | 435 | 08 Dec 2022 |
| Linear Interpolation In Parameter Space is Good Enough for Fine-Tuned Language Models | Mark Rofin, Nikita Balagansky, Daniil Gavrilov | MoMe, KELM | 5 | 22 Nov 2022 |
| Class Interference of Deep Neural Networks | Dongcui Diao, Hengshuai Yao, Bei Jiang | | 1 | 31 Oct 2022 |
| Plateau in Monotonic Linear Interpolation -- A "Biased" View of Loss Landscape for Deep Networks | Xiang Wang, Annie Wang, Mo Zhou, Rong Ge | MoMe | 10 | 03 Oct 2022 |
| Model Zoos: A Dataset of Diverse Populations of Neural Network Models | Konstantin Schürholt, Diyar Taskiran, Boris Knyazev, Xavier Giró-i-Nieto, Damian Borth | | 29 | 29 Sep 2022 |
| Special Properties of Gradient Descent with Large Learning Rates | Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich | MLT | 8 | 30 May 2022 |
| FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks | Aleksandar Doknic, Torsten Möller | | 2 | 09 Apr 2022 |
| Fusing finetuned models for better pretraining | Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz | FedML, AI4CE, MoMe | 87 | 06 Apr 2022 |
| Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent | Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad | | 11 | 21 Oct 2021 |
| Robust fine-tuning of zero-shot models | Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, ..., Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt | VLM | 689 | 04 Sep 2021 |
| On Accelerating Distributed Convex Optimizations | Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra | | 7 | 19 Aug 2021 |
| What can linear interpolation of neural network loss landscapes tell us? | Tiffany J. Vlaar, Jonathan Frankle | MoMe | 27 | 30 Jun 2021 |
| End-To-End Bias Mitigation: Removing Gender Bias in Deep Learning | Tal Feldman, Ashley Peake | FaML | 13 | 06 Apr 2021 |
| The large learning rate phase of deep learning: the catapult mechanism | Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari | ODL | 234 | 04 Mar 2020 |