PRIMER: Perception-Aware Robust Learning-based Multiagent Trajectory Planner

14 June 2024
Kota Kondo
Claudius T. Tewari
Andrea Tagliabue
J. Tordesillas
Parker C. Lusk
Mason B. Peterson
Jonathan P. How
Abstract

In decentralized multiagent trajectory planners, agents need to communicate and exchange their positions to generate collision-free trajectories. However, due to localization errors and uncertainties, trajectory deconfliction can fail even if trajectories are perfectly shared between agents. To address this issue, we first present PARM and PARM*, perception-aware, decentralized, asynchronous multiagent trajectory planners that enable a team of agents to navigate uncertain environments while deconflicting trajectories and avoiding obstacles using perception information. PARM* differs from PARM in that it is less conservative, using more computation to find closer-to-optimal solutions. While these methods achieve state-of-the-art performance, they suffer from high computational costs, as they need to solve large optimization problems onboard, making it difficult for agents to replan at high rates. To overcome this challenge, we present our second key contribution, PRIMER, a learning-based planner trained with imitation learning (IL) using PARM* as the expert demonstrator. PRIMER leverages the low computational cost of neural network inference at deployment and achieves computation speeds up to 5500 times faster than optimization-based approaches.
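For intuition, the sketch below outlines what imitation-learning training of a planner like PRIMER could look like: a neural network is regressed onto trajectories produced by an optimization-based expert (standing in for PARM*). All names, dimensions, and network sizes here are illustrative assumptions, not details taken from the paper.

# Minimal imitation-learning (behavior cloning) sketch, assuming the expert
# planner supplies (observation, trajectory) pairs; dimensions and architecture
# are placeholders, not the paper's actual design.
import torch
import torch.nn as nn

OBS_DIM = 64    # assumed: encoded agent/obstacle state plus perception uncertainty
TRAJ_DIM = 30   # assumed: flattened trajectory parameters (e.g., spline coefficients)

class PlannerPolicy(nn.Module):
    """Small MLP mapping an observation to trajectory parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, TRAJ_DIM),
        )

    def forward(self, obs):
        return self.net(obs)

def train_step(policy, optimizer, obs_batch, expert_traj_batch):
    """One IL step: regress the network's output onto the expert's trajectory."""
    pred = policy(obs_batch)
    loss = nn.functional.mse_loss(pred, expert_traj_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy = PlannerPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    # Random placeholder data standing in for expert demonstrations.
    obs = torch.randn(128, OBS_DIM)
    expert_traj = torch.randn(128, TRAJ_DIM)
    for epoch in range(5):
        loss = train_step(policy, optimizer, obs, expert_traj)
        print(f"epoch {epoch}: imitation loss {loss:.4f}")

At deployment, only the forward pass of the trained network is evaluated, which is what makes a learned planner much cheaper per replanning step than solving a large onboard optimization problem.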

@article{kondo2025_2406.10060,
  title={PRIMER: Perception-Aware Robust Learning-based Multiagent Trajectory Planner},
  author={Kota Kondo and Claudius T. Tewari and Andrea Tagliabue and Jesus Tordesillas and Parker C. Lusk and Mason B. Peterson and Jonathan P. How},
  journal={arXiv preprint arXiv:2406.10060},
  year={2025}
}