Transformers and large language models (LLMs) have revolutionized machine learning, with attention mechanisms at the core of their success. As the landscape of attention variants expands, so too do the challenges of optimizing their performance, particularly across different hardware platforms. Current optimization strategies are often narrowly focused, requiring extensive manual intervention to accommodate changes in model configurations or hardware environments. In this paper, we introduce AttentionEngine, a comprehensive framework designed to streamline the optimization of attention mechanisms across heterogeneous hardware backends. By decomposing attention computation into modular operations with customizable components, AttentionEngine enables flexible adaptation to diverse algorithmic requirements. The framework further automates kernel optimization through a combination of programmable templates and a robust cross-platform scheduling strategy. Empirical results show performance gains of up to 10x on configurations beyond the reach of existing methods. AttentionEngine offers a scalable, efficient foundation for developing and deploying attention mechanisms with minimal manual tuning. Our code has been open-sourced and is available at this https URL.
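To make the idea of "decomposing attention computation into modular operations with customizable components" concrete, the sketch below shows one way such a decomposition can look: attention is expressed as a fixed skeleton (score computation, score modification, normalization, value aggregation) with user-supplied callbacks for the customizable stages. This is a minimal NumPy reference under assumed names (`modular_attention`, `score_mod`, `normalize` are illustrative, not AttentionEngine's actual API), intended only to convey how different attention variants can share one template.

```python
# Minimal sketch of attention split into pluggable stages.
# The names and signatures here are hypothetical illustrations,
# not AttentionEngine's real interface.
import numpy as np

def modular_attention(q, k, v, score_mod, normalize):
    """q, k, v: [seq_len, head_dim] arrays for a single head."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = (q @ k.T) * scale          # stage 1: relevance scores
    scores = score_mod(scores)          # stage 2: customizable modification (mask, bias, ...)
    weights = normalize(scores)         # stage 3: customizable row-wise normalization
    return weights @ v                  # stage 4: weighted aggregation of values

def causal_mask(scores):
    # Disallow attending to future positions.
    n = scores.shape[0]
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    return np.where(future, -np.inf, scores)

def softmax_rows(scores):
    shifted = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

# Standard causal softmax attention is one instantiation of the skeleton.
q, k, v = (np.random.randn(8, 64) for _ in range(3))
out = modular_attention(q, k, v, score_mod=causal_mask, normalize=softmax_rows)

# A linear-attention-style variant swaps in different components
# without touching the shared skeleton.
out_linear = modular_attention(
    q, k, v,
    score_mod=lambda s: np.maximum(s, 0.0),
    normalize=lambda s: s / (s.sum(axis=-1, keepdims=True) + 1e-6),
)
```

The point of the sketch is the separation of concerns: because the skeleton is fixed, a framework can lower it to hardware-specific kernel templates and tune the schedule automatically, while users only specify the customizable stages.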
@article{chen2025_2502.15349,
  title   = {AttentionEngine: A Versatile Framework for Efficient Attention Mechanisms on Diverse Hardware Platforms},
  author  = {Feiyang Chen and Yu Cheng and Lei Wang and Yuqing Xia and Ziming Miao and Lingxiao Ma and Fan Yang and Jilong Xue and Zhi Yang and Mao Yang and Haibo Chen},
  journal = {arXiv preprint arXiv:2502.15349},
  year    = {2025}
}