SDPRLayers: Certifiable Backpropagation Through Polynomial Optimization Problems in Robotics

29 May 2024
Connor T. Holmes, Frederike Dümbgen, Tim D. Barfoot
arXiv:2405.19309
Abstract

A recent set of techniques in the robotics community, known as certifiably correct methods, frames robotics problems as polynomial optimization problems (POPs) and applies convex semidefinite programming (SDP) relaxations to either find or certify their global optima. In parallel, differentiable optimization allows optimization problems to be embedded into end-to-end learning frameworks and has received considerable attention in the robotics community. In this paper, we consider the ill effects of convergence to spurious local minima in learning frameworks that use differentiable optimization. We present SDPRLayers, an approach that addresses this issue by combining convex relaxations with implicit differentiation techniques to provide certifiably correct solutions and gradients throughout the training process. We provide theoretical results that outline conditions for the correctness of these gradients and give efficient means for their computation. Our approach is first applied to two simple but illustrative simulated examples, which expose the pitfalls of relying on local optimization in existing state-of-the-art differentiable optimization methods. We then apply our method to a real-world application: training a deep neural network to detect image keypoints for robot localization under challenging lighting conditions. We provide an open-source PyTorch implementation of SDPRLayers.
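To make the core idea concrete, the following is a minimal sketch of differentiating through an SDP relaxation of a toy POP, written with the cvxpylayers library rather than the authors' SDPRLayers code; the toy problem, the dimension n, and the downstream loss are illustrative assumptions, not taken from the paper.

import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Toy POP: min_x x^T Q x subject to ||x||^2 = 1 (nonconvex; local minima exist).
# Shor relaxation: min_X tr(Q X) subject to tr(X) = 1, X PSD, with X standing
# in for x x^T. If the SDP solution X* is rank one, the relaxation is tight and
# the recovered x* is a certified global minimizer of the POP.
n = 3
Q = cp.Parameter((n, n))                 # cost matrix, produced upstream (e.g. by a network)
X = cp.Variable((n, n), symmetric=True)  # relaxation variable, X ~ x x^T
problem = cp.Problem(
    cp.Minimize(cp.trace(Q @ X)),
    [X >> 0, cp.trace(X) == 1],
)
sdp_layer = CvxpyLayer(problem, parameters=[Q], variables=[X])

# Forward pass through the SDP; gradients flow back to A via implicit
# differentiation of the optimality conditions (handled by cvxpylayers/diffcp).
A = torch.randn(n, n, requires_grad=True)
Q_t = 0.5 * (A + A.T)                    # symmetrize the cost matrix
X_star, = sdp_layer(Q_t)
X_sym = 0.5 * (X_star + X_star.T)        # guard against small numerical asymmetry

# Extract the candidate minimizer from the dominant eigenvector of X*;
# a (near) rank-one X* certifies global optimality of x*.
evals, evecs = torch.linalg.eigh(X_sym)  # eigenvalues in ascending order
x_star = evecs[:, -1]

loss = (x_star ** 2 * torch.arange(n, dtype=x_star.dtype)).sum()  # placeholder loss
loss.backward()                          # d(loss)/dA through the SDP layer

In the paper's framework, it is the tightness of the relaxation that certifies the solution, and hence the gradient obtained by implicit differentiation, as corresponding to the global optimum; the sketch above only hints at this via the rank-one check.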

View on arXiv: https://arxiv.org/abs/2405.19309