TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization

25 March 2025
Liang Pan
Zeshi Yang
Zhiyang Dou
Wenjia Wang
Buzhen Huang
Bo Dai
Taku Komura
Jingbo Wang
Abstract

Synthesizing diverse and physically plausible Human-Scene Interactions (HSI) is pivotal for both computer animation and embodied AI. Despite encouraging progress, current methods mainly focus on developing separate controllers, each specialized for a specific interaction task. This significantly hinders the ability to tackle a wide variety of challenging HSI tasks that require the integration of multiple skills, e.g., sitting down while carrying an object. To address this issue, we present TokenHSI, a single, unified transformer-based policy capable of multi-skill unification and flexible adaptation. The key insight is to model the humanoid proprioception as a separate shared token and combine it with distinct task tokens via a masking mechanism. Such a unified policy enables effective knowledge sharing across skills, thereby facilitating multi-task training. Moreover, our policy architecture supports variable-length inputs, enabling flexible adaptation of learned skills to new scenarios. By training additional task tokenizers, we can not only modify the geometries of interaction targets but also coordinate multiple skills to address complex tasks. Experiments demonstrate that our approach significantly improves versatility, adaptability, and extensibility across various HSI tasks. Website: this https URL
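The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of the general idea of such a tokenized multi-task policy: a shared proprioception token combined with per-task tokens under a key-padding mask in a transformer encoder. All module names, dimensions, and the specific masking scheme are assumptions made for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a tokenized multi-task policy in the spirit of
# TokenHSI. Names, dimensions, and the masking scheme are illustrative
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

class TokenizedPolicy(nn.Module):
    def __init__(self, proprio_dim=128, task_dims=(64, 96), d_model=256,
                 n_heads=4, n_layers=4, action_dim=28):
        super().__init__()
        # Shared tokenizer mapping humanoid proprioception to one token.
        self.proprio_tokenizer = nn.Linear(proprio_dim, d_model)
        # One tokenizer per task; extending to a new task would add a new
        # tokenizer while reusing the transformer backbone (assumption).
        self.task_tokenizers = nn.ModuleList(
            nn.Linear(d, d_model) for d in task_dims)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, proprio, task_obs, task_mask):
        # proprio: (B, proprio_dim); task_obs[i]: (B, task_dims[i])
        # task_mask: (B, num_tasks) bool, True = this task token is inactive.
        tokens = [self.proprio_tokenizer(proprio).unsqueeze(1)]
        tokens += [tok(obs).unsqueeze(1)
                   for tok, obs in zip(self.task_tokenizers, task_obs)]
        x = torch.cat(tokens, dim=1)  # (B, 1 + num_tasks, d_model)
        # The shared proprioception token is never masked out; inactive
        # task tokens are excluded from attention via the padding mask.
        pad_mask = torch.cat(
            [torch.zeros_like(task_mask[:, :1]), task_mask], dim=1)
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        # Decode actions from the shared token's output representation.
        return self.action_head(h[:, 0])
```

Decoding actions from the shared proprioception token's output is one plausible design choice under these assumptions; the appeal of the token interface is that skills can share one backbone while each task contributes only a small tokenizer, and variable numbers of task tokens are handled naturally by the mask.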

@article{pan2025_2503.19901,
  title={TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization},
  author={Liang Pan and Zeshi Yang and Zhiyang Dou and Wenjia Wang and Buzhen Huang and Bo Dai and Taku Komura and Jingbo Wang},
  journal={arXiv preprint arXiv:2503.19901},
  year={2025}
}