FANformer: Improving Large Language Models Through Effective Periodicity Modeling

28 February 2025
Yihong Dong
Ge Li
Xue Jiang
Yongding Tao
Kechi Zhang
Hao Zhu
Huanyu Liu
Jiazheng Ding
Jia Li
Jinliang Deng
Hong Mei
Abstract

Periodicity, as one of the most important basic characteristics, lays the foundation for structured knowledge acquisition and systematic cognitive processes in human learning. However, potential flaws in the periodicity modeling of the Transformer affect the learning efficiency and the establishment of underlying principles from data for large language models (LLMs) built upon it. In this paper, we demonstrate that integrating effective periodicity modeling can improve the learning efficiency and performance of LLMs. We introduce FANformer, which integrates the Fourier Analysis Network (FAN) into the attention mechanism to achieve efficient periodicity modeling by modifying the feature projection process of attention. Extensive experimental results on language modeling show that FANformer consistently outperforms the Transformer when scaling up model size and training tokens, underscoring its superior learning efficiency. To further validate the effectiveness of FANformer, we pretrain FANformer-1B on 1 trillion tokens. FANformer-1B exhibits marked improvements on downstream tasks compared to open-source LLMs with a similar number of parameters or training tokens. The results position FANformer as an effective and promising architecture for advancing LLMs.
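The abstract describes FANformer as modifying the attention mechanism's feature projection so that part of the projected representation carries explicit periodic (cos/sin) features in the style of a Fourier Analysis Network. As a rough illustration only, the PyTorch sketch below shows one way such a projection could be wired into multi-head attention; the class names, the periodic_ratio split, and all hyperparameters are assumptions made for illustration, not the paper's actual implementation.

import torch
import torch.nn as nn


class FANProjection(nn.Module):
    """Illustrative FAN-style projection: part of the output dimension is
    allocated to periodic (cos/sin) features, the rest to an activated
    linear part. The split ratio and names are assumptions."""

    def __init__(self, d_in: int, d_out: int, periodic_ratio: float = 0.25):
        super().__init__()
        d_p = int(d_out * periodic_ratio) // 2       # dims for cos and for sin
        d_g = d_out - 2 * d_p                        # remaining non-periodic dims
        self.w_p = nn.Linear(d_in, d_p, bias=False)  # periodic branch
        self.w_g = nn.Linear(d_in, d_g)              # non-periodic branch
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.w_p(x)
        # Concatenate cos/sin features with the standard activated projection.
        return torch.cat([torch.cos(p), torch.sin(p), self.act(self.w_g(x))], dim=-1)


class FANAttention(nn.Module):
    """Multi-head self-attention whose Q/K/V projections are replaced by
    FAN-style projections, sketching the 'modified feature projection' idea."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = FANProjection(d_model, d_model)
        self.k_proj = FANProjection(d_model, d_model)
        self.v_proj = FANProjection(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape

        def split(z):  # (b, t, d_model) -> (b, heads, t, d_head)
            return z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.o_proj(out)

A module like FANAttention would then stand in for the self-attention block of a standard Transformer layer, which is the level at which the paper compares FANformer against the Transformer baseline.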

@article{dong2025_2502.21309,
  title={FANformer: Improving Large Language Models Through Effective Periodicity Modeling},
  author={Yihong Dong and Ge Li and Xue Jiang and Yongding Tao and Kechi Zhang and Hao Zhu and Huanyu Liu and Jiazheng Ding and Jia Li and Jinliang Deng and Hong Mei},
  journal={arXiv preprint arXiv:2502.21309},
  year={2025}
}