
Scaling Up Efficient Small Language Models Serving and Deployment for Semantic Job Search

Comments: 10 pages main text, 2 pages bibliography, 2 figures, 10 tables
Abstract

Large Language Models (LLMs) have demonstrated impressive quality when applied to predictive tasks such as relevance ranking and semantic search. However, deploying such LLMs remains prohibitively expensive for industry applications with strict latency and throughput requirements. In this work, we present lessons and efficiency insights from developing a purely text-based decoder-only Small Language Model (SLM) for a semantic search application at LinkedIn. In particular, we discuss model compression techniques such as pruning that allow us to reduce the model size by up to 40% while maintaining accuracy. Additionally, we present context compression techniques that allow us to reduce the input context length by up to 10x with minimal loss of accuracy. Finally, we present practical lessons from optimizing the serving infrastructure for deploying such a system on GPUs at scale, serving millions of requests per second. Taken together, these techniques allow us to increase our system's throughput by 10x in a real-world deployment while meeting our quality bar.
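
To make the pruning direction concrete, here is a minimal depth-pruning sketch, assuming a Hugging Face-style decoder-only model. It is not the paper's recipe: gpt2 is a stand-in backbone, and the ~60% keep ratio (a ~40% depth cut) and the evenly spaced layer-selection heuristic are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM

# Stand-in base model; the paper does not name its SLM backbone here.
model = AutoModelForCausalLM.from_pretrained("gpt2")

layers = model.transformer.h            # ModuleList of decoder blocks
n_layers = len(layers)
n_keep = max(2, round(n_layers * 0.6))  # keep ~60% of blocks (~40% depth cut)

# Keep evenly spaced blocks, always retaining the first and last ones
# (a common layer-pruning heuristic, assumed here for illustration).
keep_idx = sorted({round(i * (n_layers - 1) / (n_keep - 1)) for i in range(n_keep)})
model.transformer.h = torch.nn.ModuleList(layers[i] for i in keep_idx)
model.config.n_layer = len(model.transformer.h)

# Sanity check: the pruned model still runs; in practice, accuracy is
# then recovered with fine-tuning or distillation.
ids = torch.tensor([[50256]])
with torch.no_grad():
    print(model(ids).logits.shape)  # (1, 1, vocab_size)
```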
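
The context compression step could similarly be sketched as extractive filtering: keep only the sentences most relevant to the query. The abstract does not specify the method; the encoder choice (all-MiniLM-L6-v2), the compress helper, and the 10% keep ratio are hypothetical stand-ins chosen to mirror the stated 10x reduction.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

def compress(query: str, sentences: list[str], keep_ratio: float = 0.1) -> str:
    """Keep the top keep_ratio sentences most similar to the query (~10x shorter)."""
    embs = encoder.encode([query] + sentences, normalize_embeddings=True)
    sims = embs[1:] @ embs[0]               # cosine similarity to the query
    k = max(1, int(len(sentences) * keep_ratio))
    top = sorted(np.argsort(-sims)[:k])     # top-k, restored to document order
    return " ".join(sentences[i] for i in top)

# Toy usage: filter a job description down to its query-relevant sentences.
doc = ("We build large-scale search systems. Our stack uses GPUs. "
       "The role requires Python. We offer free snacks. "
       "Experience with ranking models is a plus.")
print(compress("machine learning ranking engineer", doc.split(". ")))
```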
