SIGHT: Single-Image Conditioned Generation of Hand Trajectories for Hand-Object Interaction

28 March 2025
Alexey Gavryushin
Florian Redhardt
Gaia Di Lorenzo
Luc Van Gool
Marc Pollefeys
Kaichun Mo
Xi Wang
Abstract

We introduce a novel task of generating realistic and diverse 3D hand trajectories given a single image of an object, which may appear in a hand-object interaction scene or be pictured by itself. When humans grasp an object, appropriate trajectories naturally form in our minds to use it for specific tasks. Hand-object interaction trajectory priors can greatly benefit applications in robotics, embodied AI, augmented reality and related fields. However, synthesizing realistic and appropriate hand trajectories from a single object or hand-object interaction image is a highly ambiguous task, which requires correctly identifying the object of interest and possibly even the correct interaction among many possible alternatives. To tackle this challenging problem, we propose the SIGHT-Fusion system, consisting of a curated pipeline for extracting visual features of hand-object interaction details from egocentric videos involving object manipulation, and a diffusion-based conditional motion generation model that processes the extracted features. We train our method on video data with corresponding hand trajectory annotations, without supervision in the form of action labels. For evaluation, we establish benchmarks on the first-person FPHAB and HOI4D datasets, testing our method against various baselines with multiple metrics. We also introduce task simulators for executing the generated hand trajectories and reporting task success rates as an additional metric. Experiments show that our method generates more appropriate and realistic hand trajectories than baselines and shows promising generalization to unseen objects. The accuracy of the generated hand trajectories is confirmed in a physics simulation setting, showcasing the authenticity of the created sequences and their applicability in downstream uses.
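The abstract describes a diffusion-based motion generation model conditioned on image features, but does not give architectural details. The sketch below is an illustrative, minimal DDPM-style conditional trajectory denoiser and training step only; the module names, dimensions, MLP backbone, and noise schedule are assumptions for illustration, not the authors' SIGHT-Fusion design.

```python
import torch
import torch.nn as nn

class TrajectoryDenoiser(nn.Module):
    """Predicts the noise added to a 3D hand trajectory, conditioned on an
    image feature vector and a diffusion timestep (illustrative sketch only)."""
    def __init__(self, traj_len=64, traj_dim=3, cond_dim=512, hidden=256):
        super().__init__()
        self.traj_len, self.traj_dim = traj_len, traj_dim
        self.net = nn.Sequential(
            nn.Linear(traj_len * traj_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, traj_len * traj_dim),
        )

    def forward(self, noisy_traj, t, img_feat):
        # noisy_traj: (B, traj_len, traj_dim); t: (B,); img_feat: (B, cond_dim)
        x = torch.cat([noisy_traj.flatten(1), img_feat, t[:, None].float()], dim=1)
        return self.net(x).view(-1, self.traj_len, self.traj_dim)

def training_step(model, traj, img_feat, alphas_cumprod):
    """One DDPM-style step: corrupt the trajectory at a random timestep and
    regress the added noise, conditioned on the image feature."""
    B = traj.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,))
    noise = torch.randn_like(traj)
    a = alphas_cumprod[t].view(B, 1, 1)
    noisy = a.sqrt() * traj + (1 - a).sqrt() * noise
    pred_noise = model(noisy, t, img_feat)
    return nn.functional.mse_loss(pred_noise, noise)
```

In such a setup, the image feature (e.g., from a frozen visual encoder) acts as the conditioning signal, and sampling would iteratively denoise a random trajectory under that conditioning; the paper's actual feature extraction pipeline and denoiser architecture may differ substantially.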

@article{gavryushin2025_2503.22869,
  title={SIGHT: Single-Image Conditioned Generation of Hand Trajectories for Hand-Object Interaction},
  author={Alexey Gavryushin and Florian Redhardt and Gaia Di Lorenzo and Luc Van Gool and Marc Pollefeys and Kaichun Mo and Xi Wang},
  journal={arXiv preprint arXiv:2503.22869},
  year={2025}
}