
LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer

Abstract

Generating cognitive-aligned layered SVGs remains challenging: existing methods tend to produce either oversimplified single-layer outputs or optimization-induced shape redundancies. We propose LayerTracer, a diffusion-transformer-based (DiT) framework that bridges this gap by learning designers' layered SVG creation processes from a novel dataset of sequential design operations. Our approach operates in two phases. First, a text-conditioned DiT generates multi-phase rasterized construction blueprints that simulate human design workflows. Second, layer-wise vectorization with path deduplication produces clean, editable SVGs. For image vectorization, we introduce a conditional diffusion mechanism that encodes reference images into latent tokens, guiding hierarchical reconstruction while preserving structural integrity. Extensive experiments demonstrate LayerTracer's superior performance over optimization-based and neural baselines in both generation quality and editability, effectively aligning AI-generated vectors with professional design cognition.
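
For concreteness, below is a minimal sketch (not the authors' released code) of the second phase: diffing consecutive blueprint stages, vectorizing each newly added layer, and deduplicating repeated paths before assembling the layered SVG. The phase-one DiT is stubbed with a synthetic frame generator, and the bounding-box "vectorizer" stands in for real contour tracing; every function name here is hypothetical.

import numpy as np

def generate_blueprint_frames(num_frames=4, size=64, seed=0):
    """Stand-in for the text-conditioned DiT: returns a sequence of
    rasterized construction stages, each adding content to the last."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros((size, size), dtype=np.uint8)
    frames = []
    for _ in range(num_frames):
        # Each stage paints one new rectangular region (a toy "layer").
        y, x = rng.integers(0, size - 16, size=2)
        canvas = canvas.copy()
        canvas[y:y + 16, x:x + 16] = 255
        frames.append(canvas)
    return frames

def vectorize_layer(diff_mask):
    """Toy vectorizer: reduce a changed region to its bounding box,
    standing in for genuine contour tracing of the layer's shape."""
    ys, xs = np.nonzero(diff_mask)
    if len(ys) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())  # one "path"

def layered_svg_from_frames(frames):
    """Diff consecutive stages, vectorize each newly added layer, and
    drop duplicate paths so the final SVG stays clean and editable."""
    paths, seen = [], set()
    prev = np.zeros_like(frames[0])
    for frame in frames:
        new_content = (frame > 0) & (prev == 0)  # pixels added this stage
        path = vectorize_layer(new_content)
        if path is not None and path not in seen:  # path deduplication
            seen.add(path)
            paths.append(path)
        prev = frame
    # Emit one <rect> per layer, bottom-to-top, mirroring design order.
    rects = "\n".join(
        f'  <rect x="{x0}" y="{y0}" width="{x1 - x0 + 1}" height="{y1 - y0 + 1}"/>'
        for (x0, y0, x1, y1) in paths
    )
    return f'<svg xmlns="http://www.w3.org/2000/svg">\n{rects}\n</svg>'

print(layered_svg_from_frames(generate_blueprint_frames()))

Each SVG element corresponds to one construction stage, so layers remain individually editable rather than being merged into a single flattened path.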

@article{song2025_2502.01105,
  title={LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer},
  author={Yiren Song and Danze Chen and Mike Zheng Shou},
  journal={arXiv preprint arXiv:2502.01105},
  year={2025}
}