Adaptive Rank Allocation: Speeding Up Modern Transformers with RaNA Adapters

23 March 2025
Roberto Garcia, Jerry Liu, Daniel Sorvisto, Sabri Eyuboglu
Abstract

Large Language Models (LLMs) are computationally intensive, particularly during inference. Neuron-adaptive techniques, which selectively activate neurons in Multi-Layer Perceptron (MLP) layers, offer some speedups but suffer from limitations in modern Transformers. These include reliance on sparse activations, incompatibility with attention layers, and the use of costly neuron masking techniques. To address these issues, we propose the Adaptive Rank Allocation framework and introduce the Rank and Neuron Allocator (RaNA) adapter. RaNA adapters leverage rank adapters, which operate on linear layers by applying both low-rank matrix decompositions and adaptive masking to efficiently allocate compute without depending on activation sparsity. This enables RaNA to be generally applied to MLPs and linear components of attention modules, while eliminating the need for expensive maskers found in neuron-adaptive methods. Notably, when compared to neuron adapters, RaNA improves perplexity by up to 7 points and increases accuracy by up to 8 percentage points when reducing FLOPs by ∼44% in state-of-the-art Transformer architectures. These results position RaNA as a robust solution for improving inference efficiency in modern Transformer architectures.
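The mechanism the abstract describes, a low-rank factorization of a linear layer combined with input-adaptive masking of the rank components, can be illustrated with a short sketch. This is a minimal illustration assuming a PyTorch setting; the class and parameter names (RankAdapterLinear, rank_budget) are hypothetical, and the paper's actual rank-allocation and masking procedures are more involved than the magnitude-based top-k used here.

import torch
import torch.nn as nn

class RankAdapterLinear(nn.Module):
    """Approximates a dense linear map W x with a low-rank product U (V x),
    then keeps only the rank components most active for the current input."""

    def __init__(self, weight: torch.Tensor, rank: int, rank_budget: int):
        super().__init__()
        # Truncated SVD of the frozen weight gives the low-rank factors.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.U = nn.Parameter(U[:, :rank] * S[:rank])  # (d_out, rank)
        self.V = nn.Parameter(Vh[:rank, :])            # (rank, d_in)
        self.rank_budget = rank_budget                 # components kept per input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x @ self.V.T                               # (batch, rank)
        # Adaptive masking in rank space: zero all but the largest-magnitude
        # components (an efficient implementation would skip their compute).
        idx = z.abs().topk(self.rank_budget, dim=-1).indices
        mask = torch.zeros_like(z).scatter_(-1, idx, 1.0)
        return (z * mask) @ self.U.T                   # (batch, d_out)

# Hypothetical usage: wrap a single linear projection of a pretrained model.
proj = nn.Linear(4096, 11008, bias=False)
adapter = RankAdapterLinear(proj.weight.detach(), rank=1024, rank_budget=512)
y = adapter(torch.randn(2, 4096))

Because the mask acts on rank components rather than neurons, the same construction applies to any linear layer, including the projections inside attention modules, which is the generality the abstract highlights over neuron-adaptive methods.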

@article{garcia2025_2503.18216,
  title={Adaptive Rank Allocation: Speeding Up Modern Transformers with RaNA Adapters},
  author={Roberto Garcia and Jerry Liu and Daniel Sorvisto and Sabri Eyuboglu},
  journal={arXiv preprint arXiv:2503.18216},
  year={2025}
}