Follow the STARs: Dynamic ω-Regular Shielding of Learned Policies

Abstract

This paper presents a novel dynamic post-shielding framework that enforces the full class of ω-regular correctness properties over pre-computed probabilistic policies. This constitutes a paradigm shift from the predominant setting of safety-shielding -- i.e., ensuring that nothing bad ever happens -- to a shielding process that additionally enforces liveness -- i.e., ensures that something good eventually happens. At its core, our method uses Strategy-Template-based Adaptive Runtime Shields (STARs), which leverage permissive strategy templates to enable post-shielding with minimal interference. As their main feature, STARs introduce a mechanism to dynamically control interference, allowing a tunable enforcement parameter to balance formal obligations and task-specific behavior at runtime. This makes it possible to trigger more aggressive enforcement when needed, while preserving optimized policy choices otherwise. In addition, STARs support runtime adaptation to changing specifications or actuator failures, making them especially suited for cyber-physical applications. We evaluate STARs on a mobile robot benchmark to demonstrate their controllable interference when enforcing (incrementally updated) ω-regular correctness properties over learned probabilistic policies.
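The shielding mechanism described above can be illustrated with a minimal sketch. All names and data structures below (the `TEMPLATE` map, the `shield` function, the split into "safe" and "live" actions) are simplifying assumptions for illustration, not the paper's actual formalization: a permissive strategy template is reduced to, per state, a set of safe actions and a subset of "live" actions that make progress toward the liveness obligation, and the enforcement parameter is modeled as a probability of steering toward a live action.

```python
import random

# Hypothetical sketch of a STARs-style post-shield; the template format
# and the enforcement semantics are illustrative assumptions.

# Per state: actions the template permits ("safe"), and the subset that
# makes progress toward the liveness obligation ("live").
TEMPLATE = {
    "s0": {"safe": {"a", "b"}, "live": {"b"}},
    "s1": {"safe": {"a"},      "live": {"a"}},
}

def shield(state, policy_ranking, enforcement=0.5, rng=random):
    """Pick an action for `state`.

    `policy_ranking` is the learned policy's actions, best first.
    `enforcement` in [0, 1] tunes interference: 0 only blocks unsafe
    actions; 1 always steers toward liveness progress.
    """
    tmpl = TEMPLATE[state]
    # Minimal interference: keep the policy's ranking, drop unsafe actions.
    safe_choices = [a for a in policy_ranking if a in tmpl["safe"]]
    if not safe_choices:
        # Policy offers nothing template-compliant: shield overrides outright.
        return next(iter(tmpl["safe"]))
    if rng.random() < enforcement:
        # Aggressive enforcement: prefer the best-ranked progress action.
        live = [a for a in safe_choices if a in tmpl["live"]]
        if live:
            return live[0]
    # Otherwise defer to the policy's top safe choice.
    return safe_choices[0]
```

With `enforcement=0.0` the shield behaves like a classical safety shield (it only filters unsafe actions); with `enforcement=1.0` it additionally forces liveness progress whenever a safe progress action exists. Runtime adaptation to changed specifications or actuator failures would correspond to swapping out the template map while the policy keeps running.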

@article{anand2025_2505.14689,
  title={Follow the STARs: Dynamic $\omega$-Regular Shielding of Learned Policies},
  author={Ashwani Anand and Satya Prakash Nayak and Ritam Raha and Anne-Kathrin Schmuck},
  journal={arXiv preprint arXiv:2505.14689},
  year={2025}
}