Intra-Layer Recurrence in Transformers for Language Modeling

3 May 2025
Anthony Nguyen, Wenjun Lin
Abstract

Transformer models have established new benchmarks in natural language processing; however, their increasing depth results in substantial growth in parameter counts. While existing recurrent transformer methods address this issue by reprocessing layers multiple times, they often apply recurrence indiscriminately across entire blocks of layers. In this work, we investigate Intra-Layer Recurrence (ILR), a more targeted approach that applies recurrence selectively to individual layers within a single forward pass. Our experiments show that allocating more iterations to earlier layers yields optimal results. These findings suggest that ILR offers a promising direction for optimizing recurrent structures in transformer architectures.
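To make the idea concrete, the following is a minimal PyTorch sketch of Intra-Layer Recurrence: each layer is applied a per-layer number of times within a single forward pass, sharing its parameters across repetitions. The class names (DecoderBlock, ILRTransformer) and the reuse_map values are illustrative assumptions, not the authors' exact architecture or configuration; the choice of giving earlier layers more repetitions follows the abstract's finding.

import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    # A standard pre-norm transformer block (self-attention + MLP); assumed, not the paper's exact block.
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, attn_mask=None):
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        return x

class ILRTransformer(nn.Module):
    # Intra-Layer Recurrence: layer l is applied reuse_map[l] times in one forward pass,
    # reusing the same parameters, instead of recurring over the whole block stack.
    def __init__(self, n_layers: int, d_model: int, n_heads: int, reuse_map: list):
        super().__init__()
        assert len(reuse_map) == n_layers
        self.layers = nn.ModuleList(DecoderBlock(d_model, n_heads) for _ in range(n_layers))
        self.reuse_map = reuse_map

    def forward(self, x, attn_mask=None):
        for layer, repeats in zip(self.layers, self.reuse_map):
            for _ in range(repeats):  # repeat this single layer before moving on
                x = layer(x, attn_mask=attn_mask)
        return x

# Example (hypothetical configuration): a 4-layer model where earlier layers
# receive more iterations, e.g. reuse_map=[3, 2, 1, 1], consistent with the
# abstract's observation that extra iterations help most in early layers.
model = ILRTransformer(n_layers=4, d_model=256, n_heads=4, reuse_map=[3, 2, 1, 1])
hidden = torch.randn(2, 16, 256)  # (batch, sequence, d_model)
out = model(hidden)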

@article{nguyen2025_2505.01855,
  title={Intra-Layer Recurrence in Transformers for Language Modeling},
  author={Anthony Nguyen and Wenjun Lin},
  journal={arXiv preprint arXiv:2505.01855},
  year={2025}
}