Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference

Abstract

The common assumption in on-device AI is that GPUs, with their superior parallel processing, always provide the best performance for large language model (LLM) inference. In this work, we challenge this notion by empirically demonstrating that, under certain conditions, CPUs can outperform GPUs for LLM inference on mobile devices. Using a 1-billion-parameter LLM deployed via llama.cpp on the iPhone 15 Pro, we show that a CPU-only configuration (two threads, F16 precision) achieves 17 tokens per second, surpassing the 12.8 tokens per second obtained with GPU acceleration. We analyze the architectural factors driving this counterintuitive result, revealing that GPU memory transfer overhead and CPU thread optimization play critical roles. Furthermore, we explore the impact of thread oversubscription, quantization strategies, and hardware constraints, providing new insights into efficient on-device AI execution. Our findings challenge conventional GPU-first thinking, highlighting the untapped potential of optimized CPU inference and paving the way for smarter deployment strategies in mobile AI. However, fully explaining the observed CPU advantage remains difficult due to limited access to low-level profiling tools on iOS.
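To make the two configurations concrete, the sketch below shows how such a CPU-versus-GPU comparison might be set up with the standard llama.cpp C API: a CPU-only context pinned to two threads (n_gpu_layers = 0) versus an otherwise identical setup that offloads all layers to the GPU (Metal on iPhone). This is a minimal illustration under those assumptions, not the paper's actual benchmark code; the model filename is a placeholder.

#include "llama.h"
#include <cstdio>

int main() {
    llama_backend_init();

    // CPU-only run: keep every layer on the CPU.
    // Setting n_gpu_layers high enough to cover the whole model would
    // instead offload all layers to Metal, giving the GPU counterpart.
    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 0;

    // Placeholder path for a 1B-parameter F16 GGUF model.
    llama_model * model = llama_load_model_from_file("model-1b-f16.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Two CPU threads, matching the paper's best-performing configuration.
    llama_context_params cparams = llama_context_default_params();
    cparams.n_threads       = 2;  // threads used for token generation
    cparams.n_threads_batch = 2;  // threads used for prompt (batch) processing

    llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        return 1;
    }

    // ... tokenize a prompt, call llama_decode in a loop, and time
    // tokens/second here to compare the two configurations ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}

Holding everything else fixed and toggling only n_gpu_layers isolates the offload decision, which is what makes the 17 vs. 12.8 tokens-per-second comparison meaningful.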

@article{zhang2025_2505.06461,
  title={Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference},
  author={Haolin Zhang and Jeff Huang},
  journal={arXiv preprint arXiv:2505.06461},
  year={2025}
}