Neural Lyapunov Function Approximation with Self-Supervised Reinforcement Learning

Abstract

Control Lyapunov functions are traditionally used to design a controller that ensures convergence to a desired state, yet deriving these functions for nonlinear systems remains a complex challenge. This paper presents a novel, sample-efficient method for neural approximation of nonlinear Lyapunov functions, leveraging self-supervised Reinforcement Learning (RL) to improve training-data generation, particularly for regions of the state space that are inaccurately represented. The proposed approach employs a data-driven World Model to train Lyapunov functions from off-policy trajectories. The method is validated on both standard and goal-conditioned robotic tasks, demonstrating faster convergence and higher approximation accuracy than the state-of-the-art neural Lyapunov approximation baseline. The code is available at: this https URL
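As background, a Lyapunov function V for dynamics x_{t+1} = f(x_t, u_t) with equilibrium x* must satisfy the standard conditions

  V(x*) = 0,    V(x) > 0 for x != x*,    V(f(x, u)) - V(x) < 0 along closed-loop trajectories,

so that driving V downhill drives the state toward x*. The abstract does not specify the network architecture or training loss, so the sketch below is only one plausible reading: it assumes PyTorch, a squared network output to enforce nonnegativity, and a hinge penalty on the discrete-time decrease condition evaluated on (x, x_next) pairs drawn from off-policy trajectories. All names (LyapunovNet, lyapunov_loss, margin) are illustrative, not the authors' code.

# Hypothetical sketch of neural Lyapunov training on sampled transitions.
# Architecture and losses are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class LyapunovNet(nn.Module):
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Squared output guarantees V(x) >= 0 by construction.
        return self.net(x).pow(2).squeeze(-1)

def lyapunov_loss(V, x, x_next, x_goal, margin=0.01):
    # Hinge on the decrease condition: penalize V(x_next) >= V(x) - margin.
    decrease = torch.relu(V(x_next) - V(x) + margin).mean()
    # Anchor the goal/equilibrium state at zero: V(x*) = 0.
    anchor = V(x_goal).mean()
    return decrease + anchor

state_dim = 4
V = LyapunovNet(state_dim)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

# Stand-in batch; in the paper's setting these transitions would come from
# off-policy trajectories, possibly generated by a learned world model.
x = torch.randn(256, state_dim)
x_next = 0.9 * x            # toy contracting dynamics toward the origin
x_goal = torch.zeros(1, state_dim)

for step in range(200):
    opt.zero_grad()
    loss = lyapunov_loss(V, x, x_next, x_goal)
    loss.backward()
    opt.step()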

@article{mccutcheon2025_2503.15629,
  title={Neural Lyapunov Function Approximation with Self-Supervised Reinforcement Learning},
  author={Luc McCutcheon and Bahman Gharesifard and Saber Fallah},
  journal={arXiv preprint arXiv:2503.15629},
  year={2025}
}