Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

Large language models (LLMs) offer remarkable capabilities, yet their high inference costs restrict wider adoption. While increasing parameter counts improves accuracy, it also widens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a hardware-aware framework that accelerates LLM inference while preserving model capabilities. Using large-scale neural architecture search (NAS), Puzzle optimizes models with tens of billions of parameters. Our approach uses blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization. We showcase our framework's impact via Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a publicly available model derived from Llama-3.1-70B-Instruct. Nemotron-51B achieves a 2.17x inference throughput speedup, fitting on a single NVIDIA H100 GPU while retaining 98.4% of the original model's benchmark accuracies. Notably, it is the most accurate model supporting single-H100-GPU inference with large batch sizes, despite being trained on only 45B tokens, far fewer than the 15T used to train Llama-3.1-70B. Lastly, we derive Llama-3.3-Nemotron-49B-Super-Base to demonstrate that Puzzle can retain long-context capabilities and that lightweight alignment on these derived models allows them to surpass the parent model in specific capabilities. Our work establishes that powerful LLMs can be optimized for efficient deployment with only negligible loss in quality, underscoring that inference performance, not parameter count alone, should guide model selection.
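To make the BLD idea concrete, here is a minimal sketch of training one candidate replacement block to mimic one parent block in isolation. The names (distill_block, activation_loader) and the MSE regression objective are illustrative assumptions, not the paper's actual API or loss; the key property shown is that each (parent block, candidate) pair depends only on the parent's input activations, so all candidates across all layers can be trained in parallel.

import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_block(parent_block: nn.Module,
                  candidate: nn.Module,
                  activation_loader,  # yields input activations captured from the parent model
                  steps: int = 1000,
                  lr: float = 1e-4) -> nn.Module:
    """Train one candidate block to match one frozen parent block locally."""
    parent_block.eval()
    opt = torch.optim.AdamW(candidate.parameters(), lr=lr)
    for _, x in zip(range(steps), activation_loader):
        with torch.no_grad():
            target = parent_block(x)              # teacher output for this block only
        loss = F.mse_loss(candidate(x), target)   # hypothetical local regression loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return candidate

Because no gradient ever flows between layers, the search space of per-layer alternatives can be distilled embarrassingly in parallel, which is what makes NAS tractable at the tens-of-billions-of-parameters scale the abstract describes.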
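The assembly step can then be posed as a mixed-integer program: pick exactly one candidate per layer to maximize estimated quality subject to a hardware budget. The sketch below uses PuLP with placeholder quality and cost numbers; the paper's actual objective, constraints, and scoring are not reproduced here, only the general shape of such a formulation.

import pulp

num_layers, num_candidates = 4, 3
# quality[i][j]: estimated quality of candidate j at layer i (placeholder values)
quality = [[0.9, 0.8, 0.6], [0.95, 0.7, 0.5], [0.85, 0.8, 0.4], [0.9, 0.75, 0.6]]
# cost[i][j]: per-block inference cost in arbitrary units (placeholder values)
cost = [[3.0, 2.0, 1.0]] * num_layers
budget = 8.0  # total hardware budget (e.g., latency or memory) for the assembled model

prob = pulp.LpProblem("puzzle_block_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("pick", (range(num_layers), range(num_candidates)), cat="Binary")

# Objective: maximize the sum of per-block quality estimates.
prob += pulp.lpSum(quality[i][j] * x[i][j]
                   for i in range(num_layers) for j in range(num_candidates))
# Exactly one candidate must be chosen per layer.
for i in range(num_layers):
    prob += pulp.lpSum(x[i][j] for j in range(num_candidates)) == 1
# The assembled model must fit the hardware budget.
prob += pulp.lpSum(cost[i][j] * x[i][j]
                   for i in range(num_layers) for j in range(num_candidates)) <= budget

prob.solve()
chosen = [next(j for j in range(num_candidates) if x[i][j].value() == 1)
          for i in range(num_layers)]
print(chosen)  # index of the selected candidate for each layer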
@article{bercovich2025_2411.19146,
  title   = {Puzzle: Distillation-Based NAS for Inference-Optimized LLMs},
  author  = {Akhiad Bercovich and Tomer Ronen and Talor Abramovich and Nir Ailon and Nave Assaf and Mohammad Dabbah and Ido Galil and Amnon Geifman and Yonatan Geifman and Izhak Golan and Netanel Haber and Ehud Karpas and Roi Koren and Itay Levy and Pavlo Molchanov and Shahar Mor and Zach Moshe and Najeeb Nabwani and Omri Puny and Ran Rubin and Itamar Schen and Ido Shahaf and Oren Tropp and Omer Ullman Argov and Ran Zilberstein and Ran El-Yaniv},
  journal = {arXiv preprint arXiv:2411.19146},
  year    = {2025}
}