Learning Symbolic Persistent Macro-Actions for POMDP Solving Over Time

6 May 2025
Celeste Veronese
Daniele Meli
Alessandro Farinelli
Abstract

This paper proposes an integration of temporal logical reasoning and Partially Observable Markov Decision Processes (POMDPs) to achieve interpretable decision-making under uncertainty with macro-actions. Our method leverages a fragment of Linear Temporal Logic (LTL) based on Event Calculus (EC) to generate persistent (i.e., constant) macro-actions, which guide Monte Carlo Tree Search (MCTS)-based POMDP solvers over a time horizon, significantly reducing inference time while ensuring robust performance. Such macro-actions are learnt via Inductive Logic Programming (ILP) from a few traces of execution (belief-action pairs), thus eliminating the need for manually designed heuristics and requiring only the specification of the POMDP transition model. In the Pocman and Rocksample benchmark scenarios, our learned macro-actions demonstrate increased expressiveness and generality when compared to time-independent heuristics, while offering substantial computational efficiency improvements.
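To make the guidance mechanism concrete, below is a minimal sketch of how a persistent macro-action could bias the rollout phase of an MCTS-based POMDP solver such as POMCP: when the current belief satisfies a learned precondition, the corresponding action is held constant for a fixed window instead of being sampled at random. All names here (`PersistentMacroAction`, the `belief`/`simulator` interfaces, the guided-rollout structure) are illustrative assumptions, not the authors' code; the paper learns the preconditions via ILP over EC-based LTL formulas.

```python
import random

class PersistentMacroAction:
    """Hypothetical container: a learned belief precondition paired with a
    single action that persists (stays constant) for a fixed horizon."""
    def __init__(self, precondition, action, horizon):
        self.precondition = precondition  # callable: belief -> bool (learned via ILP in the paper)
        self.action = action              # the action held constant while the macro runs
        self.horizon = horizon            # number of steps the action persists

def macro_guided_rollout(belief, simulator, macros, max_depth, gamma=0.95):
    """Rollout policy for an MCTS-based POMDP solver: follow a matching
    persistent macro-action when one applies, otherwise act uniformly at
    random. `belief` and `simulator` are assumed interfaces for this sketch."""
    state = belief.sample()
    total, discount, depth = 0.0, 1.0, 0
    while depth < max_depth:
        # Pick the first macro whose learned precondition holds in the belief.
        macro = next((m for m in macros if m.precondition(belief)), None)
        steps = macro.horizon if macro else 1
        for _ in range(min(steps, max_depth - depth)):
            action = macro.action if macro else random.choice(simulator.actions(state))
            state, obs, reward, done = simulator.step(state, action)
            belief = belief.update(action, obs)
            total += discount * reward
            discount *= gamma
            depth += 1
            if done:
                return total
    return total
```

Holding the action constant over the macro's horizon is what shortens the effective search depth, which is the source of the inference-time reduction the abstract reports.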

@article{veronese2025_2505.03668,
  title={Learning Symbolic Persistent Macro-Actions for POMDP Solving Over Time},
  author={Celeste Veronese and Daniele Meli and Alessandro Farinelli},
  journal={arXiv preprint arXiv:2505.03668},
  year={2025}
}