
LCA: Loss Change Allocation for Neural Network Training

3 September 2019
Janice Lan, Rosanne Liu, Hattie Zhou, J. Yosinski

Papers citing "LCA: Loss Change Allocation for Neural Network Training"

9 of 9 papers shown
Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks
Lei Jiang, Yongqing Liu, Shihai Xiao, Yansong Chua
14 Aug 2022

Adversarial Parameter Defense by Multi-Step Risk Minimization
Zhiyuan Zhang, Ruixuan Luo, Xuancheng Ren, Qi Su, Liangyou Li, Xu Sun
07 Sep 2021

Experiments with Rich Regime Training for Deep Learning
Xinyan Li, A. Banerjee
26 Feb 2021

Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, Liangyou Li
10 Jun 2020

The Break-Even Point on Optimization Trajectories of Deep Neural Networks
Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, Krzysztof J. Geras
21 Feb 2020

TRADI: Tracking deep neural network weight distributions for uncertainty estimation
Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Séverine Dubuisson, Isabelle Bloch
24 Dec 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
03 Jul 2012