
Averaging Weights Leads to Wider Optima and Better Generalization
arXiv:1803.05407, 14 March 2018
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson
Topics: FedML, MoMe
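The paper's central technique, Stochastic Weight Averaging (SWA), keeps a running average of the weights that SGD visits late in training, which the authors show finds wider optima and generalizes better than the final SGD iterate. PyTorch ships an implementation in torch.optim.swa_utils; the sketch below shows typical usage. The model, loader, loss_fn, learning rates, and epoch counts are illustrative assumptions, not values prescribed by the paper.

    import torch
    from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

    def train_with_swa(model, loader, loss_fn, epochs=200, swa_start=160):
        # Hypothetical schedule: ordinary SGD for the first swa_start epochs,
        # then average weights once per epoch under a constant SWA learning rate.
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
        swa_model = AveragedModel(model)               # running average of weights
        swa_scheduler = SWALR(optimizer, swa_lr=0.05)  # constant-lr averaging phase

        for epoch in range(epochs):
            for x, y in loader:
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()
                optimizer.step()
            if epoch >= swa_start:
                swa_model.update_parameters(model)  # fold current weights into the mean
                swa_scheduler.step()

        # BatchNorm statistics must be recomputed for the averaged weights,
        # since they were collected for individual SGD iterates.
        update_bn(loader, swa_model)
        return swa_model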

Papers citing "Averaging Weights Leads to Wider Optima and Better Generalization" (6 of 306 shown)
  • Non-local NetVLAD Encoding for Video Classification. Yongyi Tang, Xing Zhang, Jingwen Wang, Shaoxiang Chen, Lin Ma, Yu-Gang Jiang. 29 Sep 2018.
  • Don't Use Large Mini-Batches, Use Local SGD. Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi. 22 Aug 2018.
  • Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers. Yonatan Geifman, Guy Uziel, Ran El-Yaniv. 21 May 2018. (UQCV)
  • SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning. Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li. 21 May 2018.
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang. 15 Sep 2016. (ODL)
  • The Loss Surfaces of Multilayer Networks. Anna Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014. (ODL)