CoProSketch: Controllable and Progressive Sketch Generation with Diffusion Model

11 April 2025
Ruohao Zhan, Yijin Li, Yisheng He, Shuo Chen, Yichen Shen, Xinyu Chen, Zilong Dong, Zhaoyang Huang, Guofeng Zhang
Abstract

Sketches serve as fundamental blueprints in artistic creation because, for painting artists, sketch editing is easier and more intuitive than pixel-level RGB image editing; yet sketch generation remains largely unexplored despite advances in generative models. We propose CoProSketch, a novel framework that provides strong controllability and rich detail for sketch generation with diffusion models. A straightforward approach is to fine-tune a pretrained image-generation diffusion model on binarized sketch images; however, we find that diffusion models fail to produce clean binary images, which leaves the generated sketches chaotic. We therefore represent sketches as unsigned distance fields (UDFs), which are continuous and can be easily decoded into sketches by a lightweight network. With CoProSketch, users generate a rough sketch from a bounding box and a text prompt. The rough sketch can be manually edited and fed back into the model for iterative refinement, and is finally decoded into a detailed sketch. Additionally, we curate the first large-scale text-sketch paired dataset as training data. Experiments demonstrate superior semantic consistency and controllability over baselines, offering a practical solution for integrating user feedback into generative workflows.
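To make the UDF representation concrete, the following minimal Python sketch (an illustration, not the authors' implementation) encodes a binary sketch as an unsigned distance field with SciPy and decodes it back by simple thresholding; the clip range and threshold are assumed values, and the paper decodes UDFs with a lightweight learned network rather than a fixed threshold.

import numpy as np
from scipy.ndimage import distance_transform_edt

def sketch_to_udf(sketch, clip=16.0):
    # sketch: binary array, True/1 where a stroke pixel is present.
    # Each pixel gets its Euclidean distance to the nearest stroke pixel,
    # clipped and scaled to [0, 1] so the target is smooth and continuous,
    # which is easier for a diffusion model to generate than hard binary edges.
    dist = distance_transform_edt(~sketch.astype(bool))
    return np.clip(dist, 0.0, clip) / clip

def udf_to_sketch(udf, thresh=0.5 / 16.0):
    # Naive decoder: keep pixels whose distance to a stroke is below `thresh`.
    # CoProSketch instead uses a lightweight network to recover cleaner strokes.
    return udf < thresh

if __name__ == "__main__":
    canvas = np.zeros((64, 64), dtype=bool)
    canvas[32, 8:56] = True                       # a single horizontal stroke
    udf = sketch_to_udf(canvas)
    recovered = udf_to_sketch(udf)
    print(udf.min(), udf.max(), recovered.sum())  # field range and recovered stroke pixels

A fixed threshold suffices for this toy example; for generated fields that are noisy or contain strokes of varying thickness, a learned decoder is the more robust choice, which is consistent with the abstract's use of a lightweight decoding network.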

View on arXiv: https://arxiv.org/abs/2504.08259
@article{zhan2025_2504.08259,
  title={CoProSketch: Controllable and Progressive Sketch Generation with Diffusion Model},
  author={Ruohao Zhan and Yijin Li and Yisheng He and Shuo Chen and Yichen Shen and Xinyu Chen and Zilong Dong and Zhaoyang Huang and Guofeng Zhang},
  journal={arXiv preprint arXiv:2504.08259},
  year={2025}
}