
arXiv:1901.10668v2 (latest)

Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference

30 January 2019
Shun Liao, Ting Chen, Tian Lin, Denny Zhou, Chong-Jun Wang
    MoE
Abstract

Computing the softmax is expensive when the number of output classes is large. In this paper, we present a novel softmax inference speedup method, Doubly Sparse Softmax (DS-Softmax), which leverages a sparse mixture of sparse experts to efficiently retrieve the top-k classes. Unlike most existing methods, which approximate a fixed softmax, our method is learning-based and adapts the softmax weights for a better inference speedup. In particular, our method learns a two-level hierarchy that divides the entire output class space into several partially overlapping experts. Each expert is sparse and contains only a subset of the output classes. To find the top-k classes, the sparse mixture lets us quickly identify the most probable expert, and the sparse expert lets us search within a small-scale softmax. We conduct empirical evaluations on several real-world tasks, including neural machine translation, language modeling, and image classification, and demonstrate that significant computation reductions can be achieved with no performance loss.
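As a rough illustration of the two-level retrieval described above, the minimal NumPy sketch below picks the most probable expert with a small gating softmax and then runs a full softmax only over that expert's retained classes. The function name, argument names, and array shapes are illustrative assumptions, not the paper's implementation, and training-time details (e.g. the group-lasso pruning that makes each expert sparse) are not shown.

import numpy as np

def ds_softmax_topk(h, gate_W, expert_class_ids, expert_W, k=5):
    """Two-level top-k retrieval sketch.

    h                : (d,) context/hidden vector
    gate_W           : (num_experts, d) gating weights
    expert_class_ids : list of 1-D int arrays, global class ids kept by each expert
    expert_W         : list of (|classes_e|, d) per-expert softmax weights
    """
    # Level 1: score the experts and pick the most probable one
    # (argmax over the gating logits is enough for retrieval).
    gate_logits = gate_W @ h
    e = int(np.argmax(gate_logits))

    # Level 2: a small-scale softmax restricted to the classes kept by that expert.
    logits = expert_W[e] @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Map the expert-local top-k back to global class ids.
    top = np.argsort(-probs)[:k]
    return expert_class_ids[e][top], probs[top]

Under these assumptions, the per-query cost drops from scoring all |V| classes to scoring the experts plus one expert's class subset, which is where the claimed inference speedup comes from when each expert keeps only a small fraction of the vocabulary.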
