Latent Embedding Adaptation for Human Preference Alignment in Diffusion Planners

This work addresses the challenge of personalizing trajectories generated by automated decision-making systems, introducing a resource-efficient approach that enables rapid adaptation to individual users' preferences. Our method leverages a pretrained conditional diffusion model with Preference Latent Embeddings (PLE), trained on a large, reward-free offline dataset. The PLE serves as a compact representation that captures user-specific preferences. By adapting the pretrained model with our proposed preference inversion method, which directly optimizes the learnable PLE, we achieve better alignment with human preferences than existing approaches such as Reinforcement Learning from Human Feedback (RLHF) and Low-Rank Adaptation (LoRA). To better reflect practical applications, we construct a benchmark using real human preferences over diverse, high-reward trajectories.
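The abstract does not spell out the inversion objective, so the following is only a rough, non-authoritative sketch. One common way to "invert" a conditioning embedding against a frozen diffusion model (in the spirit of textual inversion) is to optimize it to minimize the standard denoising loss on trajectories the user prefers. Everything in the snippet below, the denoiser interface, the q_sample helper, the embedding size, and the choice of loss, is an assumption for illustration, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def preference_inversion(denoiser, preferred_trajs,
                         emb_dim=64, steps=500, lr=1e-2,
                         num_timesteps=1000):
    # Freeze the pretrained diffusion planner; only the PLE is trainable.
    for p in denoiser.parameters():
        p.requires_grad_(False)

    # Learnable Preference Latent Embedding z (size is a guess).
    ple = torch.zeros(1, emb_dim, requires_grad=True)
    opt = torch.optim.Adam([ple], lr=lr)

    for _ in range(steps):
        # Sample one user-preferred trajectory and a diffusion timestep.
        idx = torch.randint(len(preferred_trajs), (1,))
        x0 = preferred_trajs[idx]
        t = torch.randint(num_timesteps, (1,))
        noise = torch.randn_like(x0)
        # Forward-diffuse the trajectory (q_sample is assumed to be
        # exposed by the pretrained model).
        x_t = denoiser.q_sample(x0, t, noise)
        # Denoising loss, backpropagated into the PLE alone.
        loss = F.mse_loss(denoiser(x_t, t, ple), noise)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return ple.detach()

At deployment, the frozen planner would condition on the returned embedding to generate preference-aligned trajectories, so adaptation touches only a small vector rather than the model weights.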
@article{ng2025_2503.18347,
  title={Latent Embedding Adaptation for Human Preference Alignment in Diffusion Planners},
  author={Wen Zheng Terence Ng and Jianda Chen and Yuan Xu and Tianwei Zhang},
  journal={arXiv preprint arXiv:2503.18347},
  year={2025}
}