ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Neural networks with late-phase weights (arXiv:2007.12927)
25 July 2020
J. Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin Grewe, João Sacramento

Papers citing "Neural networks with late-phase weights" (14 of 14 papers shown)
1. DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models
   Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar
   MoMe · 1 citation · 11 Apr 2024
2. Adapt then Unlearn: Exploring Parameter Space Semantics for Unlearning in Generative Adversarial Networks
   Piyush Tiwary, Atri Guha, Subhodip Panda, Prathosh A.P.
   MU, GAN · 7 citations · 25 Sep 2023
3. Understanding the effect of sparsity on neural networks robustness
   Lukas Timpl, R. Entezari, Hanie Sedghi, Behnam Neyshabur, O. Saukh
   11 citations · 22 Jun 2022
4. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
   Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt
   MoMe · 906 citations · 10 Mar 2022
5. Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
   Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
   OOD · 49 citations · 28 Jun 2021
6. Repulsive Deep Ensembles are Bayesian
   Francesco D'Angelo, Vincent Fortuin
   UQCV, BDL · 93 citations · 22 Jun 2021
7. Reinforcement Learning, Bit by Bit
   Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, M. Ibrahimi, Ian Osband, Zheng Wen
   70 citations · 06 Mar 2021
8. Posterior Meta-Replay for Continual Learning
   Christian Henning, Maria R. Cervera, Francesco D'Angelo, J. Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin Grewe, João Sacramento
   CLL, BDL · 54 citations · 01 Mar 2021
9. Learning Neural Network Subspaces
   Mitchell Wortsman, Maxwell Horton, Carlos Guestrin, Ali Farhadi, Mohammad Rastegari
   UQCV · 85 citations · 20 Feb 2021
10. Large scale distributed neural network training through online distillation
    Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
    FedML · 404 citations · 09 Apr 2018
11. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Chelsea Finn, Pieter Abbeel, Sergey Levine
    OOD · 11,677 citations · 09 Mar 2017
12. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
    Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
    UQCV, BDL · 5,660 citations · 05 Dec 2016
13. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
    ODL · 2,886 citations · 15 Sep 2016
14. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
    Y. Gal, Zoubin Ghahramani
    UQCV, BDL · 9,134 citations · 06 Jun 2015