Convergence Rates of Training Deep Neural Networks via Alternating Minimization Methods

Optimization Letters (Optim. Lett.), 2022
Main: 9 pages; Bibliography: 3 pages
Abstract

Training deep neural networks (DNNs) is an important and challenging optimization problem in machine learning due to its non-convexity and non-separable structure. Alternating minimization (AM) approaches split the composition structure of DNNs and have drawn great interest in the deep learning and optimization communities. In this paper, we propose a unified framework for analyzing the convergence rates of AM-type network training methods. Our analysis is based on the j-step sufficient decrease conditions and the Kurdyka-Łojasiewicz (KL) property, which relaxes the requirement of designing descent algorithms. We derive detailed local convergence rates as the KL exponent θ varies over [0, 1). Moreover, local R-linear convergence is established under a stronger j-step sufficient decrease condition.
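For readers unfamiliar with KL-based rate analysis, the following sketch records the standard rate trichotomy that such frameworks typically yield (the exact rates proved in this paper may differ; this is the classical pattern from KL-based convergence theory, stated here only as background). A function $f$ satisfies the KL property at $\bar{x}$ with exponent $\theta \in [0,1)$ if, near $\bar{x}$,

$$
|f(x) - f(\bar{x})|^{\theta} \le C \, \mathrm{dist}\big(0, \partial f(x)\big)
$$

for some constant $C > 0$, where $\partial f$ denotes the limiting subdifferential. For a sequence $\{x^k\}$ satisfying a sufficient decrease condition, the classical consequences are:

$$
\theta = 0:\ \text{finite termination}; \qquad
\theta \in \big(0, \tfrac{1}{2}\big]:\ \|x^k - \bar{x}\| \le c\,\rho^k \ \ (\text{R-linear}); \qquad
\theta \in \big(\tfrac{1}{2}, 1\big):\ \|x^k - \bar{x}\| = O\big(k^{-\frac{1-\theta}{2\theta - 1}}\big).
$$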
