
Deep Learning Optimization Using Self-Adaptive Weighted Auxiliary Variables

Abstract

In this paper, we develop a new optimization framework for least squares learning problems solved with fully connected neural networks or physics-informed neural networks. Gradient descent can behave inefficiently in deep learning because of the high non-convexity of loss functions and the vanishing gradient issue. Our idea is to introduce auxiliary variables that separate the layers of the deep neural network and to reformulate the loss function for ease of optimization. We design self-adaptive weights to preserve the consistency between the reformulated loss and the original mean squared loss, which guarantees that optimizing the new loss helps optimize the original problem. Numerical experiments verify this consistency and show the effectiveness and robustness of our models compared with gradient descent.
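
As a rough illustration of the layer-separation idea described above, the sketch below (in PyTorch) splits a one-hidden-layer least squares fit with an auxiliary variable U standing in for the hidden output, penalizes the layer-consistency residual, and rebalances the penalty weight adaptively. The network size, the synthetic data, and the residual-balancing weight rule are all assumptions made for illustration, not the paper's exact formulation.

# A minimal sketch, assuming a one-hidden-layer network and a simple
# residual-balancing rule for the self-adaptive weight; the paper's
# actual weighting scheme may differ.
import torch

torch.manual_seed(0)

# Synthetic least squares data: fit y = sin(3x) on [-1, 1].
x = torch.linspace(-1.0, 1.0, 128).unsqueeze(1)
y = torch.sin(3.0 * x)

hidden = 32
W1 = (0.5 * torch.randn(1, hidden)).requires_grad_(True)
b1 = torch.zeros(hidden, requires_grad=True)
W2 = (0.1 * torch.randn(hidden, 1)).requires_grad_(True)
b2 = torch.zeros(1, requires_grad=True)

# Auxiliary variable U stands in for the hidden activation tanh(x W1 + b1),
# decoupling the output layer from the first layer in the reformulated loss.
U = torch.tanh(x @ W1 + b1).detach().requires_grad_(True)

opt = torch.optim.Adam([W1, b1, W2, b2, U], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    data_res = (U @ W2 + b2) - y             # output-layer fit residual
    cons_res = U - torch.tanh(x @ W1 + b1)   # layer-consistency residual
    # Self-adaptive weight (an illustrative choice): rescale the penalty so
    # the two residual terms stay comparable; detached so it is not trained.
    with torch.no_grad():
        w = (data_res.pow(2).mean()
             / (cons_res.pow(2).mean() + 1e-12)).clamp(1e-2, 1e2)
    loss = data_res.pow(2).mean() + w * cons_res.pow(2).mean()
    loss.backward()
    opt.step()

# The original mean squared loss, evaluated with the composed network.
with torch.no_grad():
    mse = ((torch.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean()
    print(f"original MSE after training: {mse.item():.4e}")

Because the consistency penalty couples U back to the first layer's output, driving the reformulated loss to zero also drives the original mean squared loss to zero, which is the consistency property the self-adaptive weights are designed to preserve.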

@article{liu2025_2504.21501,
  title={Deep Learning Optimization Using Self-Adaptive Weighted Auxiliary Variables},
  author={Yaru Liu and Yiqi Gu and Michael K. Ng},
  journal={arXiv preprint arXiv:2504.21501},
  year={2025}
}