Inference-Time Policy Steering through Human Interactions

25 November 2024
Yanwei Wang
Lirui Wang
Yilun Du
Balakumar Sundaralingam
Xuning Yang
Yu-Wei Chao
Claudia Pérez-D'Arpino
Dieter Fox
Julie Shah
Abstract

Generative policies trained with human demonstrations can autonomously accomplish multimodal, long-horizon tasks. However, during inference, humans are often removed from the policy execution loop, limiting the ability to guide a pre-trained policy towards a specific sub-goal or trajectory shape among multiple predictions. Naive human intervention may inadvertently exacerbate distribution shift, leading to constraint violations or execution failures. To better align policy output with human intent without inducing out-of-distribution errors, we propose an Inference-Time Policy Steering (ITPS) framework that leverages human interactions to bias the generative sampling process, rather than fine-tuning the policy on interaction data. We evaluate ITPS across three simulated and real-world benchmarks, testing three forms of human interaction and associated alignment distance metrics. Among six sampling strategies, our proposed stochastic sampling with diffusion policy achieves the best trade-off between alignment and distribution shift. Videos are available at this https URL.
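The core idea, biasing a pretrained diffusion policy's sampling toward a human-specified objective instead of fine-tuning on interaction data, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: denoise_step is a placeholder for the pretrained policy's reverse-diffusion update, the alignment cost is a hypothetical squared distance to a user-specified waypoint, and guidance_scale sets how strongly sampling is steered.

# Minimal sketch of inference-time steering of a diffusion policy (Python, NumPy only).
# Hypothetical names throughout; the real policy's reverse step would replace denoise_step.
import numpy as np

def denoise_step(x_t, t):
    # Placeholder for the pretrained policy's learned reverse-diffusion update;
    # here it simply contracts the noisy trajectory toward zero.
    return 0.95 * x_t

def alignment_grad(x_t, user_point):
    # Gradient of the alignment cost ||x - user_point||^2 with respect to x;
    # the negative gradient pulls samples toward the human-provided waypoint.
    return 2.0 * (x_t - user_point)

def steered_sample(user_point, horizon=16, dim=2, steps=50,
                   guidance_scale=0.05, noise_scale=0.02, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((horizon, dim))   # start from Gaussian noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t)                                    # policy's own denoising
        x = x - guidance_scale * alignment_grad(x, user_point)    # steering bias
        if t > 0:                                                 # keep sampling stochastic
            x = x + noise_scale * rng.standard_normal(x.shape)
    return x

traj = steered_sample(user_point=np.array([1.0, -0.5]))
print(traj[-1])  # final waypoint, pulled toward the user's point

Because the bias enters only through the sampler, the pretrained policy's weights are untouched; the guidance scale then trades off alignment with the human input against staying on the policy's training distribution, which is the trade-off the abstract highlights.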

@article{wang2025_2411.16627,
  title={Inference-Time Policy Steering through Human Interactions},
  author={Yanwei Wang and Lirui Wang and Yilun Du and Balakumar Sundaralingam and Xuning Yang and Yu-Wei Chao and Claudia Pérez-D'Arpino and Dieter Fox and Julie Shah},
  journal={arXiv preprint arXiv:2411.16627},
  year={2025}
}