ARFlow: Human Action-Reaction Flow Matching with Physical Guidance

21 March 2025
Wentao Jiang
Jingya Wang
Haotao Lu
Kaiyang Ji
Baoxiong Jia
Siyuan Huang
Ye Shi
Abstract

Human action-reaction synthesis, a fundamental challenge in modeling causal human interactions, plays a critical role in applications ranging from virtual reality to social robotics. While diffusion-based models have demonstrated promising performance, they exhibit two key limitations for interaction synthesis: reliance on complex noise-to-reaction generators with intricate conditional mechanisms, and frequent physical violations in generated motions. To address these issues, we propose Action-Reaction Flow Matching (ARFlow), a novel framework that establishes direct action-to-reaction mappings, eliminating the need for complex conditional mechanisms. Our approach introduces two key innovations: an x1-prediction method that directly outputs human motions instead of velocity fields, enabling explicit constraint enforcement; and a training-free, gradient-based physical guidance mechanism that effectively prevents body penetration artifacts during sampling. Extensive experiments on NTU120 and Chi3D datasets demonstrate that ARFlow not only outperforms existing methods in terms of Fréchet Inception Distance and motion diversity but also significantly reduces body collisions, as measured by our new Intersection Volume and Intersection Frequency metrics.

@article{jiang2025_2503.16973,
  title={ARFlow: Human Action-Reaction Flow Matching with Physical Guidance},
  author={Wentao Jiang and Jingya Wang and Haotao Lu and Kaiyang Ji and Baoxiong Jia and Siyuan Huang and Ye Shi},
  journal={arXiv preprint arXiv:2503.16973},
  year={2025}
}