Improving the Convergence Rates of Forward Gradient Descent with Repeated Sampling

Abstract

Forward gradient descent (FGD) has been proposed as a biologically more plausible alternative to gradient descent, as it can be computed without a backward pass. Considering the linear model with $d$ parameters, previous work has found that the prediction error of FGD is, however, slower by a factor of $d$ than the prediction error of stochastic gradient descent (SGD). In this paper we show that by computing $\ell$ FGD steps based on each training sample, this suboptimality factor becomes $d/(\ell \wedge d)$, and thus the suboptimality of the rate disappears if $\ell \gtrsim d$. We also show that FGD with repeated sampling can adapt to low-dimensional structure in the input distribution. The main mathematical challenge lies in controlling the dependencies arising from the repeated sampling process.
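
The abstract does not spell out the estimator, so the following is only a minimal sketch of forward gradient descent with repeated sampling for the linear model with squared loss, under assumed choices: Gaussian perturbation directions, a hypothetical step-size schedule `lr`, and $\ell$ forward-gradient updates per sample. The paper's actual scheme and tuning may differ.

```python
import numpy as np

def fgd_repeated_sampling(stream, d, ell, lr, rng=None):
    """Forward gradient descent for the linear model with squared loss,
    performing `ell` forward-gradient updates per training sample.
    `stream` yields (x, y) pairs; `lr(t)` is an (assumed) step-size schedule."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros(d)
    for t, (x, y) in enumerate(stream, start=1):
        for _ in range(ell):
            xi = rng.standard_normal(d)          # random perturbation direction
            # Directional derivative of 0.5 * (x @ theta - y)**2 along xi;
            # obtainable from a single forward pass, no backward pass needed.
            dir_deriv = (x @ theta - y) * (x @ xi)
            theta -= lr(t) * dir_deriv * xi      # forward-gradient update
    return theta

# Illustrative use on synthetic data (all constants are arbitrary choices).
d, n, ell = 20, 5000, 20
rng = np.random.default_rng(0)
theta_star = rng.standard_normal(d) / np.sqrt(d)
data = ((x, x @ theta_star + 0.1 * rng.standard_normal())
        for x in rng.standard_normal((n, d)))
theta_hat = fgd_repeated_sampling(data, d, ell,
                                  lr=lambda t: 1.0 / (d * ell * np.sqrt(t)),
                                  rng=rng)
```

In this sketch, repeated sampling simply means reusing each observation for $\ell$ perturbation directions before moving to the next one; with $\ell \gtrsim d$ this corresponds to the regime in which the abstract states the rate suboptimality disappears.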
