MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by Adversarial Training

4 June 2024
Kengo Uchida
Takashi Shibuya
Yuhta Takida
Naoki Murata
Julian Tanke
Shusuke Takahashi
Yuki Mitsufuji
Abstract

In text-to-motion generation, controllability, as well as generation quality and speed, has become increasingly critical. The controllability challenges include generating a motion of a length that matches the given textual description and editing the generated motions according to control signals, such as the start and end positions and the pelvis trajectory. In this paper, we propose MoLA, which provides fast, high-quality, variable-length motion generation and can also handle multiple editing tasks in a single framework. Our approach revisits the motion representation used as inputs and outputs in the model, incorporating an activation variable to enable variable-length motion generation. Additionally, we integrate a variational autoencoder and a latent diffusion model, further enhanced through adversarial training, to achieve high-quality and fast generation. Moreover, we apply a training-free guided generation framework to achieve various editing tasks with motion control inputs. We quantitatively show the effectiveness of adversarial learning in text-to-motion generation and demonstrate the applicability of our editing framework to multiple editing tasks in the motion domain.
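
To make the training-free guided editing idea concrete, the sketch below (plain PyTorch, not the authors' code) runs a simplified DDPM-style reverse loop in a VAE latent space and, at each step, nudges the latent with the gradient of a control loss that pulls the decoded pelvis trajectory toward a target. All module names (ToyDenoiser, ToyDecoder), tensor sizes, the noise schedule, and the guidance scale are illustrative assumptions rather than details from the paper.

# Hypothetical sketch of training-free guided generation for motion editing:
# at each reverse-diffusion step, the latent is adjusted by the gradient of a
# control loss computed through the decoder. Shapes and modules are toy stand-ins.
import torch
import torch.nn as nn

LATENT_DIM, MOTION_LEN, MOTION_DIM = 32, 16, 3  # toy sizes (assumed)

class ToyDenoiser(nn.Module):
    """Stand-in for the latent diffusion denoiser (predicts noise)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 64), nn.SiLU(),
                                 nn.Linear(64, LATENT_DIM))
    def forward(self, z, t):
        t_emb = torch.full((z.shape[0], 1), float(t))
        return self.net(torch.cat([z, t_emb], dim=-1))

class ToyDecoder(nn.Module):
    """Stand-in for the VAE decoder mapping latents to motion sequences."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, MOTION_LEN * MOTION_DIM)
    def forward(self, z):
        return self.net(z).view(-1, MOTION_LEN, MOTION_DIM)

def guided_sample(denoiser, decoder, target_traj, steps=50, guidance_scale=1.0):
    """Simplified DDPM-like reverse loop with gradient-based (training-free) guidance."""
    z = torch.randn(1, LATENT_DIM)
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        z = z.detach().requires_grad_(True)
        eps = denoiser(z, t / steps)
        # Predicted clean latent (x0 estimate) at this step.
        z0_hat = (z - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        # Control loss: in this toy, the motion features are a 3-D pelvis trajectory.
        motion = decoder(z0_hat)
        loss = ((motion - target_traj) ** 2).mean()
        grad = torch.autograd.grad(loss, z)[0]
        with torch.no_grad():
            mean = (z - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            mean = mean - guidance_scale * grad          # training-free guidance step
            noise = torch.randn_like(z) if t > 0 else 0.0
            z = mean + torch.sqrt(betas[t]) * noise
    return decoder(z.detach())

if __name__ == "__main__":
    target = torch.zeros(1, MOTION_LEN, MOTION_DIM)      # e.g., keep the pelvis near the origin
    edited_motion = guided_sample(ToyDenoiser(), ToyDecoder(), target)
    print(edited_motion.shape)  # torch.Size([1, 16, 3])

The same loop would cover the other editing tasks mentioned in the abstract (e.g., fixing start and end positions) by swapping in a different control loss; this is what makes the guidance training-free, since no model is retrained for a new control signal.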

View on arXiv: https://arxiv.org/abs/2406.01867
@article{uchida2025_2406.01867,
  title={MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by Adversarial Training},
  author={Kengo Uchida and Takashi Shibuya and Yuhta Takida and Naoki Murata and Julian Tanke and Shusuke Takahashi and Yuki Mitsufuji},
  journal={arXiv preprint arXiv:2406.01867},
  year={2025}
}