HALO: Human Preference Aligned Offline Reward Learning for Robot Navigation

3 August 2025 · arXiv: 2508.01539
Gershom Seneviratne, Jianyu An, Sahire Ellahy, K. Weerakoon, Mohamed Bashir Elnoor, Jonathan Deepak Kannan, Amogha Thalihalla Sunil, Dinesh Manocha
Topic: OffRL

Papers citing "HALO: Human Preference Aligned Offline Reward Learning for Robot Navigation"

1 / 1 papers shown

World-in-World: World Models in a Closed-Loop World
Jiahan Zhang, Muqing Jiang, Nanru Dai, Taiming Lu, Arda Uzunoglu, ..., Rama Chellappa, Tianmin Shu, Alan Yuille, Yilun Du, Jieneng Chen
Topics: VGen, VLM
20 Oct 2025