Dynamic shape computations have become critical in modern machine learning workloads, especially in emerging large language models. The success of these models has driven demand for their universal deployment across a diverse set of backend environments. In this paper, we present Relax, a compiler abstraction for optimizing end-to-end dynamic machine learning workloads. Relax introduces a cross-level abstraction that encapsulates computational graphs, loop-level tensor programs, and external library calls in a single representation. Relax also introduces first-class symbolic shape annotations to track dynamic shape computations globally across the program, enabling dynamic shape-aware cross-level optimizations. We build an end-to-end compilation framework using the proposed approach to optimize dynamic shape models. Experimental results on LLMs show that Relax delivers performance competitive with state-of-the-art systems across various GPUs and enables deployment of emerging models to a broader set of environments, including mobile phones, embedded devices, and web browsers.
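To make the idea of first-class symbolic shape annotations concrete, here is a toy Python sketch (not the actual Relax API) of how symbolic dimensions such as a dynamic sequence length `n` can be tracked and checked through a matmul, rather than being erased to "unknown":

```python
# Toy illustration (NOT the Relax API): propagating symbolic shapes
# through an operator. A dimension is either a concrete int or a
# symbolic name such as "n" (e.g. a dynamic sequence length).

def matmul_shape(a, b):
    """Infer the result shape of a 2-D matmul, checking that the inner
    dimensions agree even when one or both are symbolic."""
    (m, k1), (k2, n) = a, b
    if k1 != k2:  # symbolic equality: "n" == "n" holds, "n" == "m" fails
        raise ValueError(f"inner dimensions mismatch: {k1} vs {k2}")
    return (m, n)

# A dynamic-shape LLM layer: activations ("n", 4096) times
# a weight matrix (4096, 11008); "n" flows through symbolically.
hidden = ("n", 4096)
w_up = (4096, 11008)
print(matmul_shape(hidden, w_up))
```

Because the symbolic dimension survives shape inference, downstream passes can reason about relations between shapes (for example, that two tensors share the same `n`) instead of treating every dynamic dimension as opaque.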
@article{lai2025_2311.02103,
  title={Relax: Composable Abstractions for End-to-End Dynamic Machine Learning},
  author={Ruihang Lai and Junru Shao and Siyuan Feng and Steven S. Lyubomirsky and Bohan Hou and Wuwei Lin and Zihao Ye and Hongyi Jin and Yuchen Jin and Jiawei Liu and Lesheng Jin and Yaxing Cai and Ziheng Jiang and Yong Wu and Sunghyun Park and Prakalp Srivastava and Jared G. Roesch and Todd C. Mowry and Tianqi Chen},
  journal={arXiv preprint arXiv:2311.02103},
  year={2025}
}