Convergence of Sparse Variational Inference in Gaussian Processes Regression

1 August 2020
David R. Burt
C. Rasmussen
Mark van der Wilk
arXiv:2008.00323
Abstract

Gaussian processes are distributions over functions that are versatile and mathematically convenient priors in Bayesian modelling. However, their use is often impeded for data with large numbers of observations, $N$, due to the cubic (in $N$) cost of matrix operations used in exact inference. Many solutions have been proposed that rely on $M \ll N$ inducing variables to form an approximation at a cost of $\mathcal{O}(NM^2)$. While the computational cost appears linear in $N$, the true complexity depends on how $M$ must scale with $N$ to ensure a certain quality of the approximation. In this work, we investigate upper and lower bounds on how $M$ needs to grow with $N$ to ensure high quality approximations. We show that we can make the KL-divergence between the approximate model and the exact posterior arbitrarily small for a Gaussian-noise regression model with $M \ll N$. Specifically, for the popular squared exponential kernel and $D$-dimensional Gaussian distributed covariates, $M = \mathcal{O}((\log N)^D)$ suffice, and a method with an overall computational cost of $\mathcal{O}(N (\log N)^{2D} (\log\log N)^2)$ can be used to perform inference.
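
To make the $\mathcal{O}(NM^2)$ cost concrete, below is a minimal NumPy sketch (not the authors' code) of the Titsias-style collapsed evidence lower bound that the sparse variational approximation optimises. The squared-exponential kernel parameters, the noise variance, the way inducing inputs are picked, and the $M \approx (\log N)^D$ choice in the usage example are all illustrative assumptions; the point is only that each linear-algebra step costs at most $\mathcal{O}(NM^2)$, never $\mathcal{O}(N^3)$.

```python
import numpy as np


def se_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared exponential kernel (the kernel family analysed in the paper)."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)


def collapsed_elbo(X, y, Z, noise_var=0.1):
    """Titsias-style collapsed lower bound for sparse GP regression.

    X: (N, D) inputs, y: (N,) targets, Z: (M, D) inducing inputs.
    Every step below costs at most O(N M^2); nothing is O(N^3).
    """
    N, M = X.shape[0], Z.shape[0]
    Kuu = se_kernel(Z, Z) + 1e-8 * np.eye(M)          # M x M, jitter for stability
    Kuf = se_kernel(Z, X)                             # M x N, O(N M) to build
    kff_diag_sum = N * 1.0                            # unit-variance SE kernel: diag(Kff) = 1

    L = np.linalg.cholesky(Kuu)                       # O(M^3)
    A = np.linalg.solve(L, Kuf) / np.sqrt(noise_var)  # M x N, O(N M^2)
    B = np.eye(M) + A @ A.T                           # O(N M^2)
    LB = np.linalg.cholesky(B)                        # O(M^3)
    c = np.linalg.solve(LB, A @ y)                    # length-M vector

    # log|Qff + noise_var * I| via the matrix determinant lemma
    log_det = N * np.log(noise_var) + 2.0 * np.sum(np.log(np.diag(LB)))
    # y^T (Qff + noise_var * I)^{-1} y via the Woodbury identity
    quad = (y @ y - c @ c) / noise_var
    # tr(Kff - Qff) / noise_var, the slack term of the collapsed bound
    trace = (kff_diag_sum - noise_var * np.sum(A**2)) / noise_var

    return -0.5 * (N * np.log(2.0 * np.pi) + log_det + quad + trace)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, D = 2000, 1
    X = rng.normal(size=(N, D))                        # Gaussian covariates, matching the paper's setting
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=N)
    M = int(np.ceil(np.log(N))) ** D + 5               # illustrative M ~ (log N)^D choice
    Z = X[rng.choice(N, size=M, replace=False)]        # inducing inputs taken from the data
    print(f"N={N}, M={M}, collapsed ELBO = {collapsed_elbo(X, y, Z):.2f}")
```

In this sketch the dominant costs are forming `A` and `B`, both $\mathcal{O}(NM^2)$, which is why the scaling of $M$ with $N$ studied in the paper determines the overall complexity of inference.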
