We propose a two-stage framework for motion in-betweening that combines diffusion-based motion generation with physics-based character adaptation. In Stage 1, a character-agnostic diffusion model synthesizes transitions from sparse keyframes on a canonical skeleton, allowing a single model to generalize across diverse characters. In Stage 2, a reinforcement-learning-based controller adapts the canonical motion to the target character's morphology and dynamics, correcting physical artifacts and enhancing stylistic realism. This design scales motion generation to characters with diverse skeletons without retraining the entire model. Experiments on standard benchmarks and stylized characters demonstrate that our method produces physically plausible, style-consistent motions under sparse and long-range constraints.
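The abstract does not give implementation details, but the two-stage structure can be sketched in code. The following is a minimal illustration, not the paper's method: `CanonicalDiffusionModel` and `AdaptationPolicy` are hypothetical stand-ins, the denoiser is a toy that pulls noise toward a keyframe interpolation rather than a learned network, and the Stage 2 policy is a simple proportional tracker where the paper uses a reinforcement-learned controller.

```python
# Minimal sketch of the two-stage pipeline, under the assumptions above.
import numpy as np

class CanonicalDiffusionModel:
    """Stage 1: character-agnostic denoiser over canonical-skeleton motion."""
    def __init__(self, num_joints: int, num_frames: int, steps: int = 50):
        self.num_joints, self.num_frames, self.steps = num_joints, num_frames, steps

    def denoise(self, x_t, t, keyframes):
        # Placeholder for a learned denoiser: nudge the noisy motion toward
        # a linear interpolation of the sparse keyframes.
        target = np.linspace(keyframes[0], keyframes[-1], self.num_frames)
        return x_t + 0.2 * (target - x_t)

    def sample(self, keyframes):
        # Iterative denoising from Gaussian noise, conditioned on the
        # sparse keyframes (DDPM-style loop in spirit).
        x = np.random.randn(self.num_frames, self.num_joints, 3)
        for t in reversed(range(self.steps)):
            x = self.denoise(x, t, keyframes)
        # Hard-constrain the endpoints to the given keyframes.
        x[0], x[-1] = keyframes[0], keyframes[-1]
        return x

class AdaptationPolicy:
    """Stage 2: per-character controller tracking the canonical motion."""
    def act(self, char_state, canonical_pose):
        # Placeholder for an RL policy; here a proportional tracker that
        # steps the character's pose toward the canonical reference.
        return 0.5 * (canonical_pose - char_state)

def inbetween(keyframes, char_state0):
    diffusion = CanonicalDiffusionModel(num_joints=keyframes.shape[1],
                                        num_frames=60)
    policy = AdaptationPolicy()
    canonical = diffusion.sample(keyframes)   # Stage 1: canonical transition
    state, motion = char_state0, []
    for pose in canonical:                    # Stage 2: per-frame adaptation
        state = state + policy.act(state, pose)
        motion.append(state)
    return np.stack(motion)

if __name__ == "__main__":
    keys = np.random.randn(2, 22, 3)          # two sparse keyframe poses
    out = inbetween(keys, char_state0=keys[0])
    print(out.shape)                           # (60, 22, 3)
```

The point of the split is visible even in this toy: Stage 1 never sees the target character, so only the lightweight Stage 2 controller needs to change per skeleton.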
@article{qin2025_2504.09413,
  title   = {Scalable Motion In-betweening via Diffusion and Physics-Based Character Adaptation},
  author  = {Jia Qin},
  journal = {arXiv preprint arXiv:2504.09413},
  year    = {2025}
}