T2Bs: Text-to-Character Blendshapes via Video Generation

12 September 2025
Jiahao Luo
Chaoyang Wang
Michael Vasilkovsky
V. Shakhrai
Di Liu
Peiye Zhuang
Sergey Tulyakov
Peter Wonka
Hsin-Ying Lee
James Davis
Jian Wang
arXiv:2509.10678
Main: 9 pages, 14 figures, 4 tables; appendix: 6 pages
Abstract

We present T2Bs, a framework for generating high-quality, animatable character head morphable models from text by combining static text-to-3D generation with video diffusion. Text-to-3D models produce detailed static geometry but lack motion synthesis, while video diffusion models generate motion with temporal and multi-view geometric inconsistencies. T2Bs bridges this gap by leveraging deformable 3D Gaussian splatting to align static 3D assets with video outputs. By constraining motion with static geometry and employing a view-dependent deformation MLP, T2Bs (i) outperforms existing 4D generation methods in accuracy and expressiveness while reducing video artifacts and view inconsistencies, and (ii) reconstructs smooth, coherent, fully registered 3D geometries designed to scale for building morphable models with diverse, realistic facial motions. This enables synthesizing expressive, animatable character heads that surpass current 4D generation techniques.
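
The abstract names two key mechanisms: aligning a static text-to-3D asset with generated video via deformable 3D Gaussian splatting, and a view-dependent deformation MLP that absorbs view-inconsistent video artifacts. The page provides no code, so the following PyTorch sketch is only an illustration of what such a deformation MLP might look like; the module name, layer sizes, and positional-encoding scheme are assumptions for illustration, not the authors' implementation.

# Minimal, hypothetical sketch of a view-dependent deformation MLP for
# deformable 3D Gaussian splatting. All names, sizes, and the encoding
# are assumptions; this is NOT the authors' implementation.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Standard sin/cos frequency encoding, as used in NeRF-style MLPs."""
    freqs = (2.0 ** torch.arange(num_freqs, device=x.device, dtype=x.dtype)) * torch.pi
    angles = x.unsqueeze(-1) * freqs           # (..., D, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(-2)                     # (..., D * 2 * num_freqs)


class ViewDependentDeformation(nn.Module):
    """Predicts a per-Gaussian displacement from rest position, time, and view.

    Conditioning on the viewing direction lets the network explain
    view-inconsistent artifacts in the generated video instead of baking
    them into the shared 3D geometry.
    """

    def __init__(self, num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        # Encoded dims: xyz (3) and view dir (3) give 3*2*num_freqs each;
        # scalar time gives 2*num_freqs.
        in_dim = (3 + 3) * 2 * num_freqs + 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # 3D offset per Gaussian
        )

    def forward(self, xyz, t, view_dir):
        # xyz: (N, 3) rest-pose Gaussian centers from the static text-to-3D asset
        # t: (N, 1) normalized frame time in [0, 1]
        # view_dir: (N, 3) unit viewing direction of the supervising video frame
        feat = torch.cat([
            positional_encoding(xyz),
            positional_encoding(t),
            positional_encoding(view_dir),
        ], dim=-1)
        return xyz + self.mlp(feat)  # deformed Gaussian centers


if __name__ == "__main__":
    model = ViewDependentDeformation()
    xyz = torch.randn(1024, 3)                           # static Gaussian centers
    t = torch.rand(1024, 1)                              # frame times
    view = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    deformed = model(xyz, t, view)
    print(deformed.shape)                                # torch.Size([1024, 3])

The design point worth noting is the view branch: per-view error in the generated video can be attributed to the viewing direction rather than forced into the shared geometry, which is consistent with the abstract's claim of reduced view inconsistencies.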
