ResearchTrend.AI

arXiv:2602.03921

SpecMD: A Comprehensive Study On Speculative Expert Prefetching

3 February 2026
Duc Hoang, Ajay Jaiswal, Mohammad Samragh, Minsik Cho
Topic: MoE
Links: arXiv (abs) · PDF · HTML
Main: 10 pages · 11 figures · 4 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Mixture-of-Experts (MoE) models enable sparse expert activation, meaning that only a subset of the model's parameters is used during each inference step. However, translating this sparsity into practical performance requires an expert caching mechanism. Prior work has proposed hardware-centric caching policies, but how these policies interact with one another and with different hardware specifications remains poorly understood. To address this gap, we develop SpecMD, a standardized framework for benchmarking ad-hoc cache policies across hardware configurations. Using SpecMD, we perform an exhaustive benchmark of several MoE caching strategies, reproducing and extending prior approaches in controlled settings with realistic constraints. Our experiments reveal that MoE expert access does not follow the temporal-locality assumptions underlying policies such as LRU and LFU. Motivated by this observation, we propose Least-Stale, a novel eviction policy that exploits MoE's predictable expert access patterns to reduce collision misses by up to 85× over LRU. With these gains, we achieve over 88% hit rates and up to 34.7% time-to-first-token (TTFT) reduction on OLMoE with only 5% (0.6 GB) of VRAM cache capacity.
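The abstract describes Least-Stale only at a high level, so the following is a speculative sketch, not the paper's implementation: assuming the policy uses MoE routing predictability to evict the cached expert whose predicted next activation lies furthest in the future (a Belady-style rule, in contrast to LRU's recency rule), a minimal version might look like this. The class name `LeastStaleCache` and the `predict_next_use` callback are hypothetical names introduced here for illustration.

```python
class LeastStaleCache:
    """Hypothetical sketch of a prediction-driven expert cache.

    Unlike LRU, which evicts the least-recently-used expert, this
    policy evicts the expert predicted to stay unused the longest,
    exploiting the (assumed) predictability of MoE expert routing.
    """

    def __init__(self, capacity, predict_next_use):
        # predict_next_use(expert_id, now) -> predicted step of the
        # expert's next activation (assumed to be supplied by a router
        # predictor; its design is not specified in the abstract).
        self.capacity = capacity
        self.predict_next_use = predict_next_use
        self.cache = set()

    def access(self, expert_id, now):
        """Return True on a cache hit; on a miss, admit the expert,
        evicting the entry with the furthest predicted next use."""
        if expert_id in self.cache:
            return True
        if len(self.cache) >= self.capacity:
            victim = max(self.cache,
                         key=lambda e: self.predict_next_use(e, now))
            self.cache.remove(victim)
        self.cache.add(expert_id)
        return False
```

On a cyclic access trace that thrashes LRU (e.g. experts 0, 1, 2 repeating with a capacity of 2, where LRU scores zero hits), this rule retains the expert needed soonest and recovers hits, which is the kind of gap the abstract's 85× collision-miss reduction points at.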
