
LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model (arXiv:2405.03485)

International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 2024
6 May 2024
Haowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, Ruizhen Hu
ArXiv (abs) · PDF · HTML · GitHub (53★)

Papers citing "LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model"

9 / 9 papers shown
Step2Motion: Locomotion Reconstruction from Pressure Sensing Insoles
J. L. Pontón, Eduardo Alvarado, Lin Geng Foo, N. Pelechano, Carlos Andújar, Marc Habermann
26 Oct 2025
Spatial-Temporal Multi-Scale Quantization for Flexible Motion Generation
Zan Wang, Jingze Zhang, Yixin Chen, Baoxiong Jia, Wei Liang, Siyuan Huang
12 Aug 2025
PP-Motion: Physical-Perceptual Fidelity Evaluation for Human Motion Generation
Sihan Zhao, Zixuan Wang, Tianyu Luan, Jia Jia, Wentao Zhu, Jiebo Luo, Junsong Yuan, Nan Xi
11 Aug 2025
Toward Rich Video Human-Motion2D Generation
Ruihao Xi, Xuekuan Wang, Yongcheng Li, Shuhua Li, Zichen Wang, Yiwei Wang, Feng Wei, Cairong Zhao
17 Jun 2025
Multi-Person Interaction Generation from Two-Person Motion Priors
Wenning Xu, Shiyu Fan, Paul Henderson, Edmond S. L. Ho
23 May 2025
MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm
Ziyan Guo, Zeyu Hu, De Wen Soh, Na Zhao
04 Feb 2025
BiPO: Bidirectional Partial Occlusion Network for Text-to-Motion Synthesis
Seong-Eun Hong, Soobin Lim, Juyeong Hwang, Minwook Chang, Hyeongyeop Kang
28 Nov 2024
KinMo: Kinematic-aware Human Motion Understanding and Generation
Pengfei Zhang, Pinxin Liu, Hyeongwoo Kim, Pablo Garrido, Bindita Chaudhuri
23 Nov 2024
EvAlignUX: Advancing UX Research through LLM-Supported Exploration of Evaluation Metrics
Qingxiao Zheng, Minrui Chen, Pranav Sharma, Yiliu Tang, Mehul Oswal, Yiren Liu, Yun Huang
23 Sep 2024