SpinMeRound: Consistent Multi-View Identity Generation Using Diffusion Models

14 April 2025
Stathis Galanakis
Alexandros Lattas
Stylianos Moschoglou
Bernhard Kainz
Stefanos Zafeiriou
Abstract

Despite recent progress in diffusion models, generating realistic head portraits from novel viewpoints remains a significant challenge. Most current approaches are constrained to limited angular ranges, predominantly focusing on frontal or near-frontal views. Moreover, although recently emerged large-scale diffusion models have proven robust in handling 3D scenes, they underperform on facial data due to its complex structure and the pitfalls of the uncanny valley. In this paper, we propose SpinMeRound, a diffusion-based approach designed to generate consistent and accurate head portraits from novel viewpoints. By leveraging a set of input views alongside an identity embedding, our method synthesizes diverse viewpoints of a subject while robustly preserving its unique identity features. Our experiments showcase the model's capabilities in 360° head synthesis, outperforming current state-of-the-art multi-view diffusion models.
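The abstract describes conditioning a diffusion model on a few input views and an identity embedding so that a target viewpoint of the same subject can be sampled. As a rough illustration of that conditioning-and-sampling idea only, below is a minimal PyTorch sketch with a toy denoiser and a standard DDPM ancestral-sampling loop; the architecture, the 6-D pose vector, the pooled view features, the noise schedule, and all names (ToyViewConditionedDenoiser, sample_novel_view) are assumptions made for illustration and do not reflect SpinMeRound's actual implementation.

```python
# Hypothetical sketch: a diffusion denoiser conditioned on an identity embedding,
# pooled features of the input views, and a target camera pose. NOT the paper's model.
import torch
import torch.nn as nn


class ToyViewConditionedDenoiser(nn.Module):
    """Toy stand-in for a view- and identity-conditioned noise predictor."""

    def __init__(self, img_ch: int = 3, cond_dim: int = 256):
        super().__init__()
        # identity embedding + pooled input-view features + 6-D pose + scalar timestep
        self.cond_proj = nn.Linear(2 * cond_dim + 6 + 1, cond_dim)
        self.conv_in = nn.Conv2d(img_ch, 64, 3, padding=1)
        self.film = nn.Linear(cond_dim, 2 * 64)  # FiLM-style scale/shift modulation
        self.act = nn.SiLU()
        self.conv_out = nn.Conv2d(64, img_ch, 3, padding=1)

    def forward(self, x_t, t, id_emb, view_feats, target_pose):
        cond = torch.cat([id_emb, view_feats, target_pose, t.float().view(-1, 1)], dim=-1)
        scale, shift = self.film(self.cond_proj(cond)).chunk(2, dim=-1)
        h = self.conv_in(x_t)
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.conv_out(self.act(h))  # predicted noise


@torch.no_grad()
def sample_novel_view(denoiser, id_emb, view_feats, target_pose, steps=50, size=64):
    """Plain DDPM ancestral sampling; the schedule is illustrative, not from the paper."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, 3, size, size)  # start from pure noise for the target view
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([t]), id_emb, view_feats, target_pose)
        x = (x - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:  # add noise on all but the final step
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x.clamp(-1, 1)


# Usage with placeholder conditioning signals (all shapes are assumptions):
id_emb = torch.randn(1, 256)      # e.g. from a face-recognition encoder
view_feats = torch.randn(1, 256)  # pooled encoding of the available input views
target_pose = torch.randn(1, 6)   # target camera parameters for the novel viewpoint
image = sample_novel_view(ToyViewConditionedDenoiser(), id_emb, view_feats, target_pose)
```

In a real multi-view setup the reference views would more likely be encoded and attended to rather than pooled into a single vector, but the control flow the abstract describes, conditioning on identity and views and then iteratively denoising toward the target viewpoint, is what the sketch is meant to convey.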

@article{galanakis2025_2504.10716,
  title={SpinMeRound: Consistent Multi-View Identity Generation Using Diffusion Models},
  author={Stathis Galanakis and Alexandros Lattas and Stylianos Moschoglou and Bernhard Kainz and Stefanos Zafeiriou},
  journal={arXiv preprint arXiv:2504.10716},
  year={2025}
}