ResearchTrend.AI


arXiv: 1912.00071 (v3, latest)

Safety Guarantees for Planning Based on Iterative Gaussian Processes

29 November 2019
Kyriakos Polymenakos
Luca Laurenti
A. Patané
Jan-Peter Calliess
L. Cardelli
Marta Z. Kwiatkowska
Alessandro Abate
Stephen J. Roberts
Abstract

Gaussian Processes (GPs) are widely employed in control and learning because of their principled treatment of uncertainty. However, tracking uncertainty for iterative, multi-step predictions in general leads to an analytically intractable problem. While approximation methods exist, they do not come with guarantees, making it difficult to estimate their reliability and to trust their predictions. In this work, we derive formal probability error bounds for iterative prediction and planning with GPs. Building on GP properties, we bound the probability that random trajectories lie in specific regions around the predicted values. Namely, given a tolerance ε > 0, we compute regions around the predicted trajectory values such that GP trajectories are guaranteed to lie inside them with probability at least 1 − ε. We verify experimentally that our method tracks the predictive uncertainty correctly, even when current approximation techniques fail. Furthermore, we show how the proposed bounds can be employed within a safe reinforcement learning framework to verify the safety of candidate control policies, guiding the synthesis of provably safe controllers.
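To make the setting concrete, the following is a minimal illustrative sketch, not the paper's method: an exact GP regressor rolled forward over a short horizon by feeding the posterior mean back in, with a per-step high-probability region computed from the Gaussian marginal and an ε budget split across steps by a union bound. The kernel choice, the helper names (`rbf_kernel`, `gp_posterior`, `iterative_predict`), and the naive mean-propagation (which ignores input uncertainty, precisely the issue the paper's bounds address) are all assumptions for illustration.

```python
# Hypothetical sketch of iterative GP prediction with per-step regions.
# NOT the paper's construction: it propagates only the mean and uses a
# Gaussian quantile per step plus a union bound over the horizon.
import numpy as np
from statistics import NormalDist


def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix for 1-D inputs a, b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)


def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Exact GP posterior mean and variance at x_test (Cholesky solve)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v * v, axis=0)
    return mean, np.maximum(var, 0.0)  # clip tiny negative values


def iterative_predict(x_train, y_train, x0, horizon, eps):
    """Roll the GP mean forward from x0. Returns per-step (mean, radius)
    pairs where, under each step's Gaussian marginal, the value lies in
    mean +/- radius with probability >= 1 - eps/horizon; a union bound
    then gives probability >= 1 - eps over the whole trajectory."""
    # Two-sided Gaussian quantile for the per-step budget eps/horizon.
    z = NormalDist().inv_cdf(1.0 - (eps / horizon) / 2.0)
    x, traj = x0, []
    for _ in range(horizon):
        mean, var = gp_posterior(x_train, y_train, np.array([x]))
        traj.append((mean[0], z * np.sqrt(var[0])))
        x = mean[0]  # naive: feed the mean back in, discarding uncertainty
    return traj


# Toy dynamics x_{t+1} = 0.9 * x_t, observed with small noise.
rng = np.random.default_rng(0)
xs = rng.uniform(-2, 2, 30)
ys = 0.9 * xs + 0.01 * rng.standard_normal(30)
traj = iterative_predict(xs, ys, x0=1.0, horizon=5, eps=0.05)
```

Because the input fed back at each step is itself uncertain, these per-step Gaussian regions are generally too optimistic over multiple steps; the abstract's contribution is a bound that accounts for this and holds for the random GP trajectories themselves.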
