ResearchTrend.AI

Comprehensive Relighting: Generalizable and Consistent Monocular Human Relighting and Harmonization

3 April 2025
Junying Wang
Jingyuan Liu
Xin Sun
Krishna Kumar Singh
Zhixin Shu
He Zhang
Jimei Yang
Nanxuan Zhao
Tuanfeng Y. Wang
Simon S. Chen
Ulrich Neumann
Jae Shin Yoon
Abstract

This paper introduces Comprehensive Relighting, the first all-in-one approach that can both control and harmonize the lighting in an image or video of humans with arbitrary body parts from any scene. Building such a generalizable model is extremely challenging due to the lack of datasets, which restricts existing image-based relighting models to specific scenarios (e.g., faces or static humans). To address this challenge, we repurpose a pre-trained diffusion model as a general image prior and jointly model human relighting and background harmonization in a coarse-to-fine framework. To further enhance the temporal coherence of the relighting, we introduce an unsupervised temporal lighting model that learns lighting cycle consistency from many real-world videos without any ground truth. At inference time, our temporal lighting module is combined with the diffusion models through spatio-temporal feature blending algorithms without extra training, and we apply a new guided refinement as a post-processing step to preserve high-frequency details from the input image. In experiments, Comprehensive Relighting shows strong generalizability and temporal lighting coherence, outperforming existing image-based human relighting and harmonization methods.
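The guided refinement described in the abstract is not specified in detail, but in spirit it resembles a classic high-frequency detail transfer: keep the low frequencies of the relit output (the new lighting) and add back the high frequencies of the input image (the fine detail). The sketch below is an illustrative assumption, not the paper's actual method; the function names and the box-blur low-pass filter are placeholders.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple separable-free box blur as a stand-in low-pass filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def guided_refinement(relit, source, k=5):
    """Hypothetical detail-transfer refinement: combine the relit image's
    low frequencies (new lighting) with the source image's high
    frequencies (fine detail lost during diffusion sampling)."""
    detail = source - box_blur(source, k)   # high-pass of the input image
    return box_blur(relit, k) + detail      # low-pass of relit + detail
```

Note that if the relit image equals the source, the operation is an identity, which is the behavior one would want from a pure detail-preserving post-process.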

@article{wang2025_2504.03011,
  title={Comprehensive Relighting: Generalizable and Consistent Monocular Human Relighting and Harmonization},
  author={Junying Wang and Jingyuan Liu and Xin Sun and Krishna Kumar Singh and Zhixin Shu and He Zhang and Jimei Yang and Nanxuan Zhao and Tuanfeng Y. Wang and Simon S. Chen and Ulrich Neumann and Jae Shin Yoon},
  journal={arXiv preprint arXiv:2504.03011},
  year={2025}
}