
arXiv:2006.13326

Safe Learning under Uncertain Objectives and Constraints

23 June 2020
Mohammad Fereydounian
Zebang Shen
Aryan Mokhtari
Amin Karbasi
Hamed Hassani
Abstract

In this paper, we consider non-convex optimization problems under \textit{unknown} yet safety-critical constraints. Such problems naturally arise in a variety of domains including robotics, manufacturing, and medical procedures, where it is infeasible to know or identify all the constraints. Therefore, the parameter space should be explored in a conservative way to ensure that none of the constraints are violated during the optimization process once we start from a safe initialization point. To this end, we develop an algorithm called Reliable Frank-Wolfe (Reliable-FW). Given a general non-convex function and an unknown polytope constraint, Reliable-FW simultaneously learns the landscape of the objective function and the boundary of the safety polytope. More precisely, by assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal $\mathcal{O}(1/\epsilon^2)$ gradient oracle complexity (resp. $\tilde{\mathcal{O}}(1/\epsilon^3)$, also optimal, in the stochastic gradient setting), while ensuring the safety of all the iterates. Rather surprisingly, Reliable-FW only makes $\tilde{\mathcal{O}}((d^2/\epsilon^2)\log(1/\delta))$ queries to the noisy feasibility oracle (resp. $\tilde{\mathcal{O}}((d^2/\epsilon^4)\log(1/\delta))$ in the stochastic gradient setting), where $d$ is the dimension and $\delta$ is the reliability parameter, tightening the existing bounds even for safe minimization of convex functions. We further specialize our results to the case that the objective function is convex. A crucial component of our analysis is to introduce and apply a technique called geometric shrinkage in the context of safe optimization.
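To make the Frank-Wolfe template underlying the abstract concrete, the following is a minimal sketch of a *plain* Frank-Wolfe iteration over a known polytope (the probability simplex). It is not the paper's Reliable-FW: the learning of the unknown polytope from a noisy feasibility oracle and the geometric-shrinkage technique are omitted, and the simplex, objective, and step-size rule here are illustrative assumptions only.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Plain Frank-Wolfe over the probability simplex.

    Illustrative sketch only: Reliable-FW additionally learns the unknown
    safety polytope via a noisy feasibility oracle and applies geometric
    shrinkage to keep every iterate safe; none of that is reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = grad(x)
        # Linear minimization oracle over the simplex: the best vertex e_i
        # is the one with the smallest gradient coordinate.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2)            # standard FW step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy example: minimize f(x) = ||x - c||^2 over the simplex, with c interior.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2 * (x - c), np.ones(3) / 3)
```

Because every iterate is a convex combination of the previous iterate and a vertex, feasibility is maintained for free when the polytope is known; the paper's contribution is achieving the analogous safety guarantee when the polytope must itself be learned.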
