No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves

Recent studies have demonstrated that learning a meaningful internal representation can both accelerate generative training and enhance the generation quality of diffusion transformers. However, existing approaches either introduce an additional, complex representation training framework or rely on a large-scale, pre-trained representation foundation model to provide representation guidance during the original generative training process. In this study, we posit that the unique discriminative process inherent to diffusion transformers enables them to offer such guidance without any external representation component. We therefore propose Self-Representation Alignment (SRA), a simple and straightforward method that obtains representation guidance in a self-distillation manner. Specifically, SRA aligns the output latent representation of the diffusion transformer at an earlier layer, under higher noise, with that at a later layer, under lower noise, to progressively strengthen overall representation learning using only the generative training process. Experimental results show that applying SRA to DiTs and SiTs yields consistent performance improvements. Moreover, SRA not only significantly outperforms approaches that rely on auxiliary, complex representation training frameworks but also achieves performance comparable to methods that depend heavily on powerful external representation priors.
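The following is a minimal sketch of how such a self-alignment objective might look in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the names `dit`, `return_layer`, the EMA teacher copy, the projection head, and the cosine-similarity objective are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of an SRA-style self-alignment loss (not the authors' code).
# Assumptions: the diffusion transformer can return the hidden state of a chosen
# block via a `return_layer` argument, an EMA copy of the model serves as the
# "later layer / lower noise" target, and a small MLP projects student features
# before alignment.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class SRAWrapper(nn.Module):
    def __init__(self, dit: nn.Module, hidden_dim: int,
                 student_layer: int = 4, teacher_layer: int = 8,
                 ema_decay: float = 0.999):
        super().__init__()
        self.student = dit                       # trained online
        self.teacher = copy.deepcopy(dit)        # EMA target, no gradients
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.proj = nn.Sequential(               # lightweight projection head (assumed)
            nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim))
        self.student_layer = student_layer
        self.teacher_layer = teacher_layer
        self.ema_decay = ema_decay

    @torch.no_grad()
    def update_teacher(self):
        # Exponential moving average of student weights into the teacher.
        for pt, ps in zip(self.teacher.parameters(), self.student.parameters()):
            pt.mul_(self.ema_decay).add_(ps, alpha=1.0 - self.ema_decay)

    def alignment_loss(self, x_t_high, t_high, x_t_low, t_low, cond):
        # Student: hidden state from an earlier block at a higher noise level.
        h_student = self.student(x_t_high, t_high, cond,
                                 return_layer=self.student_layer)
        # Teacher: hidden state from a later block at a lower noise level.
        with torch.no_grad():
            h_teacher = self.teacher(x_t_low, t_low, cond,
                                     return_layer=self.teacher_layer)
        # Negative cosine similarity over tokens (one possible alignment objective).
        z = F.normalize(self.proj(h_student), dim=-1)
        y = F.normalize(h_teacher, dim=-1)
        return -(z * y).sum(dim=-1).mean()
```

In such a setup the alignment term would typically be added to the standard diffusion (or flow-matching) loss with a weighting coefficient, and `update_teacher()` would be called after each optimizer step; the specific layers, noise levels, and weighting used by SRA are detailed in the paper itself.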
@article{jiang2025_2505.02831,
  title   = {No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves},
  author  = {Dengyang Jiang and Mengmeng Wang and Liuzhuozheng Li and Lei Zhang and Haoyu Wang and Wei Wei and Guang Dai and Yanning Zhang and Jingdong Wang},
  journal = {arXiv preprint arXiv:2505.02831},
  year    = {2025}
}