
GradES: Significantly Faster Training in Transformers with Gradient-Based Early Stopping

Main: 12 pages
6 figures
Bibliography: 3 pages
14 tables
Appendix: 4 pages
Abstract

Conventional early stopping monitors global validation loss and halts all parameter updates simultaneously, which is computationally costly for large transformers because of the extended time required for validation inference. We propose GradES, a novel gradient-based early stopping approach that operates within transformer components (attention projections and feed-forward layer matrices). We found that different components converge at varying rates during fine-tuning. GradES tracks the magnitude of the gradients flowing to these matrices during backpropagation. When a projection matrix's gradients fall below a convergence threshold τ, we exclude that matrix from further updates individually, eliminating costly validation passes while allowing slow-converging matrices to continue learning. By strategically freezing parameters once their gradients converge, GradES speeds up training by 1.57–7.22× while simultaneously improving generalization through early prevention of overfitting, yielding 1.2% higher average accuracy.
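The per-matrix freezing rule described above can be sketched in a few lines. This is a minimal illustration only, with hypothetical names (`grad_norms`, `frozen`, `tau`); the paper's exact convergence criterion and update schedule may differ.

```python
def grades_step(grad_norms, frozen, tau):
    """One GradES-style decision step (illustrative sketch).

    grad_norms: dict mapping matrix name -> current gradient magnitude
    frozen:     set of matrix names already excluded from updates
    tau:        convergence threshold
    Returns the list of matrix names that should still be updated.
    """
    for name, g in grad_norms.items():
        # A matrix whose gradient magnitude drops below tau is
        # considered converged and is frozen for the rest of training.
        if name not in frozen and g < tau:
            frozen.add(name)
    return [n for n in grad_norms if n not in frozen]

# Example: attention and feed-forward projections converging at
# different rates; only the near-converged q_proj gets frozen.
frozen = set()
active = grades_step(
    {"q_proj": 0.002, "k_proj": 0.04, "up_proj": 0.09},
    frozen,
    tau=0.01,
)
# active -> ["k_proj", "up_proj"]; frozen -> {"q_proj"}
```

In a real training loop this check would run after each backward pass, and frozen matrices would simply be skipped by the optimizer, which is where the reported wall-clock savings come from.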
