
LLM Router: Prefill is All You Need

Tanay Varshney
Annie Surla
Michelle Xu
Gomathy Venkata Krishnan
Maximilian Jeblick
David Austin
Neal Vaidya
Davide Onofrio
Main: 10 pages, 7 figures, 15 tables; Bibliography: 2 pages; Appendix: 4 pages
Abstract

LLMs often achieve comparable benchmark accuracies, but their complementary performance across task subsets suggests that an Oracle router--a theoretical selector with perfect foresight--can significantly surpass any standalone model's accuracy by exploiting model-specific strengths. While current routers rely on fragile semantic signals, we propose using internal prefill activations via Encoder-Target Decoupling--a functional separation between the model providing the predictive signal (the Encoder) and the model whose performance is being estimated (the Target). This decoupling enables optimized heterogeneous pairings between encoders and target models. We use Fisher Separability (J) and Effective Dimensionality (d_eff) as mathematical probes to isolate the most predictive layer-wise signals, providing the foundation for our SharedTrunkNet architecture. SharedTrunkNet captures up to 45.58% of the accuracy gap between the strongest standalone model and the Oracle while achieving 74.31% cost savings relative to the highest-cost model.
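To make the layer-probing idea concrete, the sketch below shows one common way to compute a Fisher separability ratio and an effective dimensionality from a matrix of prefill activations. The trace-form Fisher criterion and the participation-ratio definition of d_eff used here are standard choices and assumptions on our part; the paper's exact definitions, data, and function names may differ.

```python
import numpy as np

def fisher_separability(acts, labels):
    """Trace-form Fisher criterion J for a binary split of layer activations.

    acts:   (n_samples, d) hidden states from one prefill layer.
    labels: (n_samples,) binary labels, e.g. 1 if the target model
            answered the prompt correctly (a hypothetical labeling).
    J = between-class scatter / within-class scatter.
    """
    a0, a1 = acts[labels == 0], acts[labels == 1]
    mu0, mu1 = a0.mean(axis=0), a1.mean(axis=0)
    between = np.sum((mu0 - mu1) ** 2)
    within = a0.var(axis=0).sum() + a1.var(axis=0).sum()
    return between / (within + 1e-9)

def effective_dimensionality(acts):
    """Participation ratio of the activation covariance spectrum:
    d_eff = (sum lambda_i)^2 / sum lambda_i^2, one common definition."""
    lam = np.linalg.eigvalsh(np.cov(acts, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (np.square(lam).sum() + 1e-9)

# Toy demonstration on synthetic "activations" (not real model data).
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 16))
labels = (acts[:, 0] > 0).astype(int)
print(fisher_separability(acts, labels), effective_dimensionality(acts))
```

In a routing pipeline, one would score each encoder layer with J and d_eff and keep the layer(s) whose activations best separate "target succeeds" from "target fails" examples.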
