Sliding Window Attention Training for Efficient Large Language Models

26 February 2025
Zichuan Fu
Wentao Song
Yejing Wang
Xian Wu
Yefeng Zheng
Yingying Zhang
Derong Xu
Xuetao Wei
Tong Bill Xu
Xiangyu Zhao
Abstract

Recent advances in transformer-based Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks. However, their quadratic computational complexity with respect to sequence length remains a significant bottleneck for processing long documents. As a result, many approaches, such as sparse attention and state space models, have been proposed to improve the efficiency of LLMs over long sequences. Though effective, these approaches either compromise performance or introduce structural complexity. This calls for a simple yet efficient model that preserves the fundamental Transformer architecture. To this end, we introduce SWAT, which enables efficient long-context handling via Sliding Window Attention Training. This paper first attributes the inefficiency of Transformers to the attention sink phenomenon resulting from the high variance of the softmax operation. Then, we replace softmax with the sigmoid function and combine balanced ALiBi with Rotary Position Embedding for efficient information compression and retention. Experiments demonstrate that SWAT achieves state-of-the-art performance compared with strong linear recurrent architectures on eight benchmarks. Code is available at this https URL.
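To make the mechanism concrete, the sketch below is an illustration rather than the authors' released implementation: single-head sliding-window attention that scores keys with a sigmoid in place of softmax and adds an ALiBi-style linear distance penalty. The window size, bias slope, and tensor shapes are assumptions, and the Rotary Position Embedding component is omitted for brevity.

import torch

def sliding_window_sigmoid_attention(q, k, v, window=4, alibi_slope=0.1):
    # q, k, v: (batch, seq_len, dim); each query attends only to the
    # `window` most recent positions (itself included).
    b, n, d = q.shape
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (b, n, n) raw scores

    pos = torch.arange(n)
    dist = pos[:, None] - pos[None, :]                    # i - j: >= 0 for past/self
    scores = scores - alibi_slope * dist.clamp(min=0)     # ALiBi-style distance penalty

    visible = (dist >= 0) & (dist < window)                # causal sliding-window mask
    scores = scores.masked_fill(~visible, float("-inf"))

    weights = torch.sigmoid(scores)                        # sigmoid gating instead of softmax
    return weights @ v                                     # (b, n, dim)

# Example: one head over random activations.
q = k = v = torch.randn(1, 8, 16)
print(sliding_window_sigmoid_attention(q, k, v).shape)     # torch.Size([1, 8, 16])

Because the sigmoid scores each visible key independently rather than normalizing across all keys, this sketch avoids the softmax normalization that the paper links to the attention-sink phenomenon.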

@article{fu2025_2502.18845,
  title={Sliding Window Attention Training for Efficient Large Language Models},
  author={Zichuan Fu and Wentao Song and Yejing Wang and Xian Wu and Yefeng Zheng and Yingying Zhang and Derong Xu and Xuetao Wei and Tong Xu and Xiangyu Zhao},
  journal={arXiv preprint arXiv:2502.18845},
  year={2025}
}