On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
arXiv:1609.04836, 15 September 2016
N. Keskar
Dheevatsa Mudigere
J. Nocedal
M. Smelyanskiy
P. T. P. Tang
ODL

Papers citing "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima"

Showing 50 of 1,653 citing papers
MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer
Neural Information Processing Systems (NeurIPS), 2024
Minghao Zhu
Zhengpu Wang
Mengxian Hu
Ronghao Dang
Xiao Lin
Xun Zhou
Chengju Liu
Qijun Chen
257
3
0
14 Oct 2024
What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis
International Conference on Learning Representations (ICLR), 2024
Weronika Ormaniec
Felix Dangel
Sidak Pal Singh
544
9
0
14 Oct 2024
Sharpness-Aware Minimization Efficiently Selects Flatter Minima Late in Training
International Conference on Learning Representations (ICLR), 2024
Zhanpeng Zhou
Mingze Wang
Yuchen Mao
Bingrui Li
Junchi Yan
AAML
479
9
0
14 Oct 2024
How Learning Dynamics Drive Adversarially Robust Generalization?
Yuelin Xu
Xiao Zhang
AAML
408
1
0
10 Oct 2024
OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement
Qinglun Li
Miao Zhang
Mengzhu Wang
Quanjun Yin
Li Shen
OODDFedML
235
1
0
09 Oct 2024
QT-DoG: Quantization-aware Training for Domain Generalization
Saqib Javed
Hieu Le
Mathieu Salzmann
OODMQ
333
6
0
08 Oct 2024
Extended convexity and smoothness and their applications in deep learning
Binchuan Qi
Wei Gong
Li Li
430
0
0
08 Oct 2024
Incremental Learning for Robot Shared Autonomy
Yiran Tao
Guixiu Qiao
Dan Ding
Zackory Erickson
CLL
411
0
0
08 Oct 2024
Improved Sample Complexity for Private Nonsmooth Nonconvex Optimization
Guy Kornowski
Daogao Liu
Kunal Talwar
235
3
0
08 Oct 2024
Intriguing Properties of Large Language and Vision Models
Young-Jun Lee
ByungSoo Ko
Han-Gyu Kim
Yechan Hwang
Ho-Jin Choi
LRMVLM
292
0
0
07 Oct 2024
Improving Generalization with Flat Hilbert Bayesian Inference
Tuan Truong
Quyen Tran
Quan Pham-Ngoc
Nhat Ho
Dinh Q. Phung
T. Le
442
3
0
05 Oct 2024
Towards Better Generalization: Weight Decay Induces Low-rank Bias for Neural Networks
Ke Chen
Chugang Yi
Haizhao Yang
MLT
184
2
0
03 Oct 2024
Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
International Conference on Learning Representations (ICLR), 2024
Boqian Wu
Q. Xiao
Shunxin Wang
N. Strisciuglio
Mykola Pechenizkiy
M. V. Keulen
Decebal Constantin Mocanu
Elena Mocanu
OOD3DH
514
6
0
03 Oct 2024
Revisiting Video Quality Assessment from the Perspective of Generalization
Xinli Yue
Jianhui Sun
Liangchao Yao
Fan Xia
Yuetang Deng
...
Lei Li
Fengyun Rao
Jing Lv
Qian Wang
Lingchen Zhao
MoMe
198
0
0
23 Sep 2024
Bilateral Sharpness-Aware Minimization for Flatter Minima
Jiaxin Deng
Junbiao Pang
Baochang Zhang
Qingming Huang
AAML
936
0
0
20 Sep 2024
Hidden Activations Are Not Enough: A General Approach to Neural Network Predictions
Samuel Leblanc
Aiky Rasolomanana
Marco Armenta
228
0
0
20 Sep 2024
Efficient Training of Deep Neural Operator Networks via Randomized Sampling
Sharmila Karumuri
Lori Graham-Brady
Somdatta Goswami
243
7
0
20 Sep 2024
Convergence of Sharpness-Aware Minimization Algorithms using Increasing Batch Size and Decaying Learning Rate
Hinata Harada
Hideaki Iiduka
259
1
0
16 Sep 2024
WaterMAS: Sharpness-Aware Maximization for Neural Network Watermarking
International Conference on Pattern Recognition (ICPR), 2024
Carl De Sousa Trias
Mihai P. Mitrea
Attilio Fiandrotti
Marco Cagnazzo
Sumanta Chaudhuri
Enzo Tartaglione
AAML
230
1
0
05 Sep 2024
Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization
International Conference on Machine Learning (ICML), 2024
Nayeong Kim
Juwon Kang
Sungsoo Ahn
Jungseul Ok
Suha Kwak
236
5
0
05 Sep 2024
CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models
Network and Distributed System Security Symposium (NDSS), 2024
Rui Zeng
Xi Chen
Yuwen Pu
Xuhong Zhang
Tianyu Du
Shouling Ji
356
16
0
02 Sep 2024
Fisher Information guided Purification against Backdoor Attacks
Conference on Computer and Communications Security (CCS), 2024
Nazmul Karim
Abdullah Al Arafat
Adnan Siraj Rakin
Zhishan Guo
Nazanin Rahnavard
AAML
329
5
0
01 Sep 2024
Deep Learning to Predict Late-Onset Breast Cancer Metastasis: the Single Hyperparameter Grid Search (SHGS) Strategy for Meta Tuning Concerning Deep Feed-forward Neural Network
Yijun Zhou
Om Arora-Jain
Xia Jiang
OOD
234
3
0
28 Aug 2024
Can Optimization Trajectories Explain Multi-Task Transfer?
David Mueller
Mark Dredze
Nicholas Andrews
397
2
0
26 Aug 2024
Weight Scope Alignment: A Frustratingly Easy Method for Model Merging
European Conference on Artificial Intelligence (ECAI), 2024
Yichu Xu
Xin-Chun Li
Le Gan
De-Chuan Zhan
MoMe
293
2
0
22 Aug 2024
A Noncontact Technique for Wave Measurement Based on Thermal Stereography and Deep Learning
IEEE Transactions on Instrumentation and Measurement (IEEE Trans. Instrum. Meas.), 2024
Deyu Li
L. Xiao
Handi Wei
Yan Li
Binghua Zhang
229
0
0
20 Aug 2024
Enhancing Adversarial Transferability with Adversarial Weight Tuning
AAAI Conference on Artificial Intelligence (AAAI), 2024
Jiahao Chen
Zhou Feng
Rui Zeng
Yuwen Pu
Chunyi Zhou
Yi Jiang
Yuyou Gan
Jinbao Li
S. Ji
AAML
353
8
0
18 Aug 2024
Information-Theoretic Progress Measures reveal Grokking is an Emergent Phase Transition
Kenzo Clauw
S. Stramaglia
Daniele Marinazzo
204
8
0
16 Aug 2024
Rubick: Exploiting Job Reconfigurability for Deep Learning Cluster Scheduling
Xinyi Zhang
Hanyu Zhao
Wencong Xiao
Chencan Wu
Fei Xu
Yong Li
Wei Lin
Fangming Liu
150
5
0
16 Aug 2024
Enhancing Sharpness-Aware Minimization by Learning Perturbation Radius
Xuehao Wang
Weisen Jiang
Shuai Fu
Yu Zhang
AAML
244
1
0
15 Aug 2024
Implicit Neural Representation For Accurate CFD Flow Field Prediction
L. D. Vito
Nils Pinnau
Simone Dey
AI4CE
291
1
0
12 Aug 2024
Do Sharpness-based Optimizers Improve Generalization in Medical Image Analysis?
IEEE Access, 2024
Mohamed Hassan
Aleksandar Vakanski
Min Xian
AAMLMedIm
387
3
0
07 Aug 2024
Exploring Loss Landscapes through the Lens of Spin Glass Theory
Hao Liao
Wei Zhang
Zhanyi Huang
Zexiao Long
Mingyang Zhou
Xiaoqun Wu
Rui Mao
Chi Ho Yeung
248
2
0
30 Jul 2024
Characterizing Dynamical Stability of Stochastic Gradient Descent in Overparameterized Learning
Dennis Chemnitz
Maximilian Engel
278
3
0
29 Jul 2024
Local vs Global continual learning
Giulia Lanzillotta
Sidak Pal Singh
Benjamin Grewe
Thomas Hofmann
CLL
259
0
0
23 Jul 2024
Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance
Haiquan Lu
Xiaotian Liu
Yefan Zhou
Qunli Li
Kurt Keutzer
Michael W. Mahoney
Yujun Yan
Huanrui Yang
Yaoqing Yang
196
2
0
17 Jul 2024
Overcoming Catastrophic Forgetting in Federated Class-Incremental Learning via Federated Global Twin Generator
Thinh Nguyen
Khoa D. Doan
Binh T. Nguyen
Danh Le-Phuoc
Kok-Seng Wong
FedML
210
2
0
13 Jul 2024
Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis
Stefan Horoi
Albert Manuel Orozco Camacho
Eugene Belilovsky
Guy Wolf
FedMLMoMe
225
12
0
07 Jul 2024
Multimodal Classification via Modal-Aware Interactive Enhancement
Qing-Yuan Jiang
Zhouyang Chi
Yang Yang
227
3
0
05 Jul 2024
Simplifying Deep Temporal Difference Learning
Matteo Gallici
Mattie Fellows
Benjamin Ellis
B. Pou
Ivan Masmitja
Jakob Foerster
Mario Martin
OffRL
612
53
0
05 Jul 2024
PaSE: Parallelization Strategies for Efficient DNN Training
Venmugil Elango
161
12
0
04 Jul 2024
Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks
Amit Peleg
Matthias Hein
279
0
0
04 Jul 2024
Curvature Clues: Decoding Deep Learning Privacy with Input Loss Curvature
Deepak Ravikumar
Efstathia Soufleri
Kaushik Roy
180
4
0
03 Jul 2024
Enhancing Accuracy and Parameter-Efficiency of Neural Representations for Network Parameterization
Hongjun Choi
Jayaraman J. Thiagarajan
Ruben Glatt
Shusen Liu
334
3
0
29 Jun 2024
On the Trade-off between Flatness and Optimization in Distributed Learning
Ying Cao
Zhaoxian Wu
Kun Yuan
Ali H. Sayed
470
4
0
28 Jun 2024
On Scaling Up 3D Gaussian Splatting Training
International Conference on Learning Representations (ICLR), 2024
Hexu Zhao
Haoyang Weng
Daohan Lu
A. Li
Jinyang Li
Aurojit Panda
Saining Xie
3DGS
298
37
0
26 Jun 2024
MAGIC: Meta-Ability Guided Interactive Chain-of-Distillation for Effective-and-Efficient Vision-and-Language Navigation
Liuyi Wang
Zongtao He
Mengjiao Shen
Jingwei Yang
Chengju Liu
Qijun Chen
VLM
330
3
0
25 Jun 2024
Improving robustness to corruptions with multiplicative weight perturbations
Trung Trinh
Markus Heinonen
Luigi Acerbi
Samuel Kaski
216
2
0
24 Jun 2024
MD tree: a model-diagnostic tree grown on loss landscape
Yefan Zhou
Jianlong Chen
Qinxue Cao
Konstantin Schürholt
Yaoqing Yang
296
2
0
24 Jun 2024
Effect of Random Learning Rate: Theoretical Analysis of SGD Dynamics in Non-Convex Optimization via Stationary Distribution
Naoki Yoshida
Shogo H. Nakakita
Masaaki Imaizumi
253
1
0
23 Jun 2024