arXiv:1901.03134
Gaussian processes with linear operator inequality constraints

10 January 2019
C. Agrell
Abstract

This paper presents an approach for constrained Gaussian Process (GP) regression where we assume that a set of linear transformations of the process is bounded. It is motivated by machine learning applications for high-consequence engineering systems, where this kind of information is often available from phenomenological knowledge, and the resulting constraints may be essential to achieve the level of confidence needed. We consider a GP $f$ over functions on $\mathcal{X} \subset \mathbb{R}^{n}$ taking values in $\mathbb{R}$, where the process $\mathcal{L}f$ is still Gaussian when $\mathcal{L}$ is a linear operator. Our goal is to model $f$ under the constraint that realizations of $\mathcal{L}f$ are confined to a convex set of functions. In particular, we require that $a \leq \mathcal{L}f \leq b$ for two given functions $a$ and $b$ with $a < b$ pointwise. This formulation provides a consistent way of encoding multiple linear constraints, such as shape constraints based on boundedness, monotonicity or convexity. We adopt the approach of using a sufficiently dense set of virtual observation locations where the constraint is required to hold, and derive the exact posterior for a conjugate likelihood. The results needed for a stable numerical implementation are derived, together with an efficient sampling scheme for estimating the posterior process, which is exact in the limit. A few numerical examples focusing on noiseless observations are given. This setting is relevant for computer code emulation and is also more computationally demanding than the alternative scenario with i.i.d. Gaussian noise.
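As a rough illustration of the virtual-observation idea described in the abstract, the sketch below treats the simplest case where $\mathcal{L}$ is the identity operator and the constraint is a pointwise bound $0 \leq f \leq 1$: joint samples from the unconstrained GP posterior are drawn over a prediction grid and a set of virtual locations, and only samples that satisfy the bound at the virtual locations are kept. The kernel, toy data, bounds and the naive rejection-sampling step are all illustrative assumptions, not the paper's method; the paper derives the exact (truncated) posterior and a more efficient sampling scheme.

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel matrix between 1-D point sets A and B."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)

# Toy noiseless observations (illustrative values, not taken from the paper).
X = np.array([0.1, 0.4, 0.9])
y = np.array([0.2, 0.7, 0.8])

Xv = np.linspace(0.0, 1.0, 25)    # virtual locations where the constraint must hold
Xs = np.linspace(0.0, 1.0, 100)   # prediction grid
a, b = 0.0, 1.0                   # bounds a <= f <= b (here L is the identity)

# Unconstrained GP posterior over the stacked vector [f(Xv), f(Xs)] given the data.
Z = np.concatenate([Xv, Xs])
K_xx = rbf(X, X) + 1e-8 * np.eye(len(X))
K_zx = rbf(Z, X)
K_zz = rbf(Z, Z)
L_chol = np.linalg.cholesky(K_xx)
alpha = np.linalg.solve(L_chol.T, np.linalg.solve(L_chol, y))
mu = K_zx @ alpha                              # posterior mean on Z
V = np.linalg.solve(L_chol, K_zx.T)
cov = K_zz - V.T @ V + 1e-6 * np.eye(len(Z))   # posterior covariance with jitter

# Naive rejection sampling: keep a joint posterior sample only if its values at
# the virtual locations respect the bounds.  (The paper instead derives the exact
# truncated posterior and a more efficient sampling scheme.)
C = np.linalg.cholesky(cov)
samples = []
while len(samples) < 200:
    f = mu + C @ rng.standard_normal(len(Z))
    if np.all((f[:len(Xv)] >= a) & (f[:len(Xv)] <= b)):
        samples.append(f[len(Xv):])            # keep only the prediction part

post_mean = np.mean(samples, axis=0)           # constrained posterior mean on Xs
print(post_mean.min(), post_mean.max())
```

The acceptance rate of such naive rejection sampling drops quickly as the bounds tighten or the number of virtual locations grows, which is why a dedicated sampling scheme like the one developed in the paper is needed in practice.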
