
Anchor-MoE: A Mean-Anchored Mixture of Experts For Probabilistic Regression

Main: 16 pages, 5 figures, 5 tables; bibliography: 2 pages
Abstract

Regression under uncertainty is fundamental across science and engineering. We present the Anchored Mixture of Experts (Anchor-MoE), a model that handles both probabilistic and point regression. For simplicity, we use a tuned gradient-boosting model to furnish the anchor mean; however, any off-the-shelf point regressor can serve as the anchor. The anchor prediction is projected into a latent space, where a learnable metric-window kernel scores locality and a soft router dispatches each sample to a small set of mixture-density-network experts; the experts produce a heteroscedastic correction and a predictive variance. We train by minimizing negative log-likelihood and, on a disjoint calibration split, fit a post-hoc linear map on the predicted means to improve point accuracy. On the theory side, assuming a Hölder-smooth regression function of order $\alpha$ and fixed Lipschitz partition-of-unity weights with bounded overlap, we show that Anchor-MoE attains the minimax-optimal $L^2$ risk rate $O\big(N^{-2\alpha/(2\alpha+d)}\big)$. In addition, the CRPS test generalization gap scales as $\widetilde{O}\big(\sqrt{(\log(Mh)+P+K)/N}\big)$; it is logarithmic in $Mh$ and scales as the square root of $P$ and $K$. Under bounded-overlap routing, $K$ can be replaced by $k$, and any dependence on a latent dimension is absorbed into $P$. Under uniformly bounded means and variances, an analogous $\widetilde{O}\big(\sqrt{(\log(Mh)+P+K)/N}\big)$ scaling holds for the test NLL up to constants. Empirically, across standard UCI regression benchmarks, Anchor-MoE consistently matches or surpasses the strong NGBoost baseline in RMSE and NLL; on several datasets it achieves new state-of-the-art probabilistic regression results on our benchmark suite. Code is available at this https URL.
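
To make the architecture described above concrete, here is a minimal, illustrative sketch of the forward pass and mixture NLL in PyTorch. It is not the authors' implementation: the anchor regressor, the exact form of the metric-window kernel, the expert architecture, and all names (AnchorMoE, mixture_nll, latent_dim, n_experts) are assumptions chosen for clarity; the post-hoc linear calibration step is omitted.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorMoE(nn.Module):
    """Illustrative mean-anchored mixture of MDN experts (not the paper's code)."""
    def __init__(self, x_dim, latent_dim=8, n_experts=4, hidden=32):
        super().__init__()
        # Project the features plus the scalar anchor prediction into a latent space.
        self.proj = nn.Linear(x_dim + 1, latent_dim)
        # Learnable expert centers and a diagonal metric for the locality score;
        # the paper's metric-window kernel may differ in form.
        self.centers = nn.Parameter(torch.randn(n_experts, latent_dim))
        self.log_scale = nn.Parameter(torch.zeros(latent_dim))
        # Each expert is a small MDN head producing a mean correction and a log-variance.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))
             for _ in range(n_experts)]
        )

    def forward(self, x, anchor_mean):
        # anchor_mean: point prediction from any off-the-shelf regressor (e.g. a tuned GBM).
        z = self.proj(torch.cat([x, anchor_mean.unsqueeze(-1)], dim=-1))
        # Locality score: negative scaled squared distance to each expert center.
        d2 = ((z.unsqueeze(1) - self.centers) * torch.exp(-self.log_scale)).pow(2).sum(-1)
        weights = F.softmax(-d2, dim=-1)                       # soft routing, shape (B, K)
        outs = torch.stack([e(z) for e in self.experts], 1)    # (B, K, 2)
        delta_mu, log_var = outs[..., 0], outs[..., 1]
        mu = anchor_mean.unsqueeze(-1) + delta_mu              # anchored component means
        return weights, mu, log_var

def mixture_nll(y, weights, mu, log_var):
    """Negative log-likelihood of a Gaussian mixture, averaged over the batch."""
    log_comp = -0.5 * (log_var + (y.unsqueeze(-1) - mu).pow(2) / log_var.exp()
                       + math.log(2 * math.pi))
    return -torch.logsumexp(torch.log(weights + 1e-12) + log_comp, dim=-1).mean()

# Usage sketch: in practice anchor_mean comes from the external anchor regressor.
x = torch.randn(64, 10)
anchor = torch.randn(64)                # stand-in for anchor predictions
y = anchor + 0.1 * torch.randn(64)
model = AnchorMoE(x_dim=10)
loss = mixture_nll(y, *model(x, anchor))
loss.backward()
```

The anchor enters twice in this sketch: as an input to the latent projection that drives routing, and as an additive offset to each expert's mean correction, so the experts only model residual structure and heteroscedastic noise around the anchor.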
