Scaling On-Device GPU Inference for Large Generative Models

Driven by advancements in generative AI, large machine learning models have revolutionized domains such as image processing, audio synthesis, and speech recognition. While server-based deployments remain the locus of peak performance, the imperative for on-device inference, necessitated by privacy and efficiency considerations, persists. Recognizing GPUs as the on-device ML accelerator with the widest reach, we present ML Drift, an optimized framework that extends the capabilities of state-of-the-art GPU-accelerated inference engines. ML Drift enables on-device execution of generative AI workloads that contain 10 to 100x more parameters than existing on-device generative AI models. ML Drift addresses intricate engineering challenges associated with cross-GPU API development and ensures broad compatibility across mobile and desktop/laptop platforms, thereby facilitating the deployment of significantly more complex models on resource-constrained devices. Our GPU-accelerated ML/AI inference engine achieves an order-of-magnitude performance improvement relative to existing open-source GPU inference engines.
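The abstract names cross-GPU API development as a central engineering challenge: one engine must drive several GPU APIs (e.g., OpenCL on Android, Metal on Apple platforms) across mobile and desktop targets. As a minimal sketch of what such an abstraction layer can look like, assuming a backend-neutral interface with one implementation per GPU API, the C++ below is purely illustrative; every name here (GpuBackend, OpenClBackend, CreateBackend) is hypothetical and is not ML Drift's actual API.

// Hypothetical sketch, not ML Drift's actual API: abstracting several GPU
// APIs behind one backend interface so the same inference graph can run on
// whichever API a given device exposes.
#include <cstddef>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Opaque handle to a device-resident buffer (weights, activations, ...).
struct DeviceBuffer { std::size_t size_bytes = 0; };

// Backend-neutral interface; each GPU API gets its own implementation.
class GpuBackend {
 public:
  virtual ~GpuBackend() = default;
  virtual std::string Name() const = 0;
  virtual DeviceBuffer Allocate(std::size_t size_bytes) = 0;
  virtual void Upload(DeviceBuffer& dst, const void* src, std::size_t n) = 0;
  virtual void Dispatch(const std::string& kernel,
                        const std::vector<DeviceBuffer*>& args) = 0;
};

// Stub OpenCL implementation; a real one would wrap clCreateBuffer,
// clEnqueueWriteBuffer, clEnqueueNDRangeKernel, etc.
class OpenClBackend : public GpuBackend {
 public:
  std::string Name() const override { return "OpenCL"; }
  DeviceBuffer Allocate(std::size_t size_bytes) override {
    return {size_bytes};
  }
  void Upload(DeviceBuffer&, const void*, std::size_t) override {}
  void Dispatch(const std::string&,
                const std::vector<DeviceBuffer*>&) override {}
};

// Selects a backend; real logic would probe available drivers at runtime.
std::unique_ptr<GpuBackend> CreateBackend(const std::string& api) {
  if (api == "opencl") return std::make_unique<OpenClBackend>();
  throw std::runtime_error("unsupported GPU API: " + api);
}

int main() {
  auto backend = CreateBackend("opencl");
  DeviceBuffer weights = backend->Allocate(1 << 20);  // 1 MiB of weights.
  std::vector<float> host(1 << 18, 0.0f);             // 256K floats = 1 MiB.
  backend->Upload(weights, host.data(), host.size() * sizeof(float));
  backend->Dispatch("matmul", {&weights});
  return 0;
}

Keeping kernel dispatch behind a narrow interface like this is one common way an engine stays portable across GPU APIs; the specific techniques ML Drift uses are described in the paper itself.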
@article{tang2025_2505.00232,
  title   = {Scaling On-Device GPU Inference for Large Generative Models},
  author  = {Jiuqiang Tang and Raman Sarokin and Ekaterina Ignasheva and Grant Jensen and Lin Chen and Juhyun Lee and Andrei Kulik and Matthias Grundmann},
  journal = {arXiv preprint arXiv:2505.00232},
  year    = {2025}
}