GradES: Significantly Faster Training in Transformers with Gradient-Based Early Stopping

1 September 2025
Qifu Wen
Xi Zeng
Zihan Zhou
Shuaijun Liu
M. Hosseinzadeh
Ningxin Su
Reza Rawassizadeh
arXiv:2509.01842 (abs) · PDF · HTML · HuggingFace (4 upvotes) · GitHub (2★)
Main: 12 pages · 6 figures · Bibliography: 3 pages · 14 tables · Appendix: 4 pages
Abstract

Conventional early stopping monitors a global validation loss and halts all parameter updates simultaneously, which is computationally costly for large transformers due to the extended time required for validation inference. We propose GradES, a novel gradient-based early stopping approach that operates within transformer components (attention projections and feed-forward layer matrices). We found that different components converge at varying rates during fine-tuning, for both language and vision-language models. GradES tracks the magnitude of gradient changes in backpropagation for these matrices during training. When a projection matrix's magnitude of gradient changes falls below a convergence threshold τ, we exclude that projection matrix from further updates individually, eliminating costly validation passes while allowing slower-converging matrices to continue learning. GradES speeds up training by 1.57-7.22× while simultaneously improving generalization through early prevention of overfitting, yielding 1.2% higher average accuracy on language tasks and 3.88% on multimodal benchmarks.
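The per-matrix freezing rule described in the abstract can be sketched as follows. This is an illustrative re-implementation in PyTorch, not the authors' released code: the helper name grades_step, the threshold value for tau, and the use of the mean absolute gradient as the convergence signal are assumptions.

```python
# Minimal sketch of a GradES-style per-matrix early-stopping step (illustrative
# re-implementation; `grades_step`, `tau`, and the mean-absolute-gradient
# criterion are assumptions, not the paper's exact formulation).
import torch
import torch.nn as nn


def grades_step(model: nn.Module, tau: float = 1e-3) -> None:
    """After loss.backward(), freeze any 2-D weight matrix (attention projection
    or feed-forward matrix) whose gradient magnitude has fallen below tau."""
    for name, param in model.named_parameters():
        if not param.requires_grad or param.grad is None:
            continue
        # Only consider 2-D projection / feed-forward weight matrices.
        if param.dim() != 2:
            continue
        grad_magnitude = param.grad.abs().mean().item()
        if grad_magnitude < tau:
            # Exclude this matrix from further updates; other matrices keep
            # learning, and no validation pass is required.
            param.requires_grad_(False)
            param.grad = None


# Usage inside a standard fine-tuning loop (sketch):
#   loss.backward()
#   grades_step(model, tau=1e-3)
#   optimizer.step()
#   optimizer.zero_grad()
```

Because frozen matrices no longer carry gradients, most optimizers simply skip them, which is where the reported training-time savings would come from.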
