Physically-based Lighting Augmentation for Robotic Manipulation

2 August 2025
Shutong Jin
Lezhong Wang
Ben Temming
Florian T. Pokorny
arXiv:2508.01442 (v2)
Main: 6 pages, 12 figures, 4 tables; bibliography: 3 pages
Abstract

Despite advances in data augmentation, policies trained via imitation learning still struggle to generalize across environmental variations such as lighting changes. To address this, we propose the first framework that leverages physically-based inverse rendering for lighting augmentation on real-world human demonstrations. Specifically, inverse rendering decomposes the first frame of each demonstration into geometric (surface normal, depth) and material (albedo, roughness, metallic) properties, which are then used to render appearance changes under different lighting. To ensure consistent augmentation across each demonstration, we fine-tune Stable Video Diffusion on robot execution videos for temporal lighting propagation. We evaluate our framework by measuring the structural and temporal consistency of the augmented sequences, and by assessing its effectiveness in reducing the behavior cloning generalization gap by 40.1% on a 7-DoF robot across 6 lighting conditions, using 720 real-world evaluations. We further showcase three downstream applications enabled by the proposed framework.
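
To make the relighting step concrete, below is a minimal, hypothetical sketch of re-shading a frame from the per-pixel buffers that inverse rendering recovers (albedo, surface normal, roughness, metallic) under a new directional light. The paper uses a full physically-based renderer; this Lambertian-plus-Blinn-Phong approximation, and all names in it (relight, light_dir, light_rgb, etc.), are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: re-shade one frame from inverse-rendering buffers
# under a new directional light. Simplified shading model, not the
# paper's physically-based renderer.
import numpy as np

def relight(albedo, normal, roughness, metallic, light_dir, light_rgb,
            ambient=0.05):
    """albedo: (H, W, 3) in [0, 1]; normal: (H, W, 3) unit vectors;
    roughness, metallic: (H, W) in [0, 1]; light_dir: (3,) pointing
    toward the light; light_rgb: (3,) light color/intensity."""
    l = light_dir / np.linalg.norm(light_dir)
    v = np.array([0.0, 0.0, 1.0])               # camera along +z
    h = (l + v) / np.linalg.norm(l + v)         # Blinn half vector

    n_dot_l = np.clip(normal @ l, 0.0, None)    # (H, W)
    n_dot_h = np.clip(normal @ h, 0.0, None)

    # Diffuse lobe: metals contribute (almost) no diffuse reflection.
    diffuse = albedo * (1.0 - metallic)[..., None] * n_dot_l[..., None]

    # Specular lobe: F0 blends a dielectric base (0.04) with albedo for
    # metals; rougher surfaces get a wider, dimmer highlight.
    f0 = 0.04 * (1.0 - metallic)[..., None] + albedo * metallic[..., None]
    shininess = 2.0 / np.clip(roughness, 1e-3, 1.0) ** 2  # rough mapping
    specular = f0 * (n_dot_h ** shininess)[..., None] * n_dot_l[..., None]

    out = (diffuse + specular) * light_rgb + ambient * albedo
    return np.clip(out, 0.0, 1.0)

# Example: re-shade a (dummy) frame under a warm light from upper right.
if __name__ == "__main__":
    H, W = 256, 256
    rng = np.random.default_rng(0)
    frame = relight(albedo=rng.uniform(size=(H, W, 3)),
                    normal=np.tile([0.0, 0.0, 1.0], (H, W, 1)),
                    roughness=np.full((H, W), 0.5),
                    metallic=np.zeros((H, W)),
                    light_dir=np.array([0.5, 0.5, 1.0]),
                    light_rgb=np.array([1.0, 0.9, 0.8]))
    print(frame.shape)  # (256, 256, 3)
```

Note that this only relights a single frame; in the framework described above, the relit first frame is then propagated through the rest of the demonstration video by the fine-tuned Stable Video Diffusion model, which this sketch does not cover.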
