Provably Safe Model-Based Meta Reinforcement Learning: An Abstraction-Based Approach

3 September 2021
Xiaowu Sun, Wael Fatnassi, Ulices Santa Cruz, Yasser Shoukry

Papers citing "Provably Safe Model-Based Meta Reinforcement Learning: An Abstraction-Based Approach"

2 citing papers
Federated reinforcement learning for robot motion planning with zero-shot generalization
Zhenyuan Yuan, Siyuan Xu, Minghui Zhu
FedML
20 Mar 2024
Neurosymbolic Motion and Task Planning for Linear Temporal Logic Tasks
IEEE Transactions on Robotics (TRO), 2022
Xiaowu Sun, Yasser Shoukry
11 Oct 2022