AnimatePainter: A Self-Supervised Rendering Framework for Reconstructing Painting Process

Abstract

Humans can intuitively decompose an image into a sequence of strokes to create a painting, yet existing methods for generating drawing processes are limited to specific data types and often rely on expensive human-annotated datasets. We propose a novel self-supervised framework for generating drawing processes from any type of image, treating the task as a video generation problem. Our approach reverses the drawing process by progressively removing strokes from a reference image, simulating a human-like creation sequence. Crucially, our method does not require costly datasets of real human drawing processes; instead, we leverage depth estimation and stroke rendering to construct a self-supervised dataset. We model human drawings as "refinement" and "layering" processes and introduce depth fusion layers to enable video generation models to learn and replicate human drawing behavior. Extensive experiments validate the effectiveness of our approach, demonstrating its ability to generate realistic drawings without the need for real drawing process data.
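The core data-construction idea, building a drawing sequence by reversing progressive stroke removal, can be sketched minimally. This is a hypothetical illustration, not the paper's implementation: square patch erasure stands in for the paper's stroke renderer and depth-guided layering, and `build_painting_sequence` is an invented name.

```python
import numpy as np

def build_painting_sequence(image, num_steps=8, rng=None):
    """Hypothetical sketch of the reversed self-supervision idea:
    progressively remove 'strokes' from a finished reference image,
    then reverse the frames so the sequence plays as a drawing
    process. Patch erasure is a stand-in for real stroke rendering."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    canvas = image.copy()
    frames = [canvas.copy()]          # frame 0: the finished painting
    patch = max(h, w) // 4
    for _ in range(num_steps):
        # erase a random square region back to the blank (white) canvas
        y = int(rng.integers(0, max(1, h - patch)))
        x = int(rng.integers(0, max(1, w - patch)))
        canvas[y:y + patch, x:x + patch] = 255
        frames.append(canvas.copy())
    # reversed order: mostly-blank canvas -> finished image,
    # i.e. a human-like creation sequence for a video model to learn
    return frames[::-1]
```

A video generation model could then be trained on such sequences, with the reference image as conditioning and the reversed frames as the target drawing process.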

@article{hu2025_2503.17029,
  title={AnimatePainter: A Self-Supervised Rendering Framework for Reconstructing Painting Process},
  author={Junjie Hu and Shuyong Gao and Qianyu Guo and Yan Wang and Qishan Wang and Yuang Feng and Wenqiang Zhang},
  journal={arXiv preprint arXiv:2503.17029},
  year={2025}
}