Multi-Mapping Image-to-Image Translation with Central Biasing Normalization
Recent work on image-to-image translation has extended models from one-to-one mapping to multiple mappings by injecting a latent code into the generator. By analyzing existing latent code injection models, we find that the latent code can determine the target mapping of a generator by controlling the statistical properties of its output, especially the mean value. However, in some cases normalization reduces the consistency within the same mapping or the diversity across different mappings. Our mathematical analysis shows that this is because the output distributions of the same mapping become inconsistent after batch normalization, while the effect of the latent code is eliminated after instance normalization. To solve these problems, we propose the consistency-within-diversity design criterion for multi-mapping networks. Based on this criterion, we propose central biasing normalization (CBN) to replace existing latent code injection. Experiments show that our method outperforms current state-of-the-art methods. Code and pretrained models are available at https://github.com/Xiaoming-Yu/cbn.
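As a rough illustration of the idea described above (not the authors' exact implementation; see the linked repository for that), the sketch below applies instance normalization without a learned affine transform and then adds a per-channel bias derived from the latent code. The layer name `CentralBiasingNorm2d` and the linear-plus-tanh mapping from the latent code to the bias are assumptions made for this example.

```python
import torch
import torch.nn as nn

class CentralBiasingNorm2d(nn.Module):
    """Minimal sketch of a central-biasing-style normalization layer.

    Assumption: features are instance-normalized (no learned affine), and a
    bounded per-channel bias computed from the latent code z replaces the
    usual latent code concatenation.
    """

    def __init__(self, num_features, latent_dim):
        super().__init__()
        # Instance normalization without learned scale/shift.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Maps the latent code to one bias value per feature channel.
        self.bias = nn.Linear(latent_dim, num_features)

    def forward(self, x, z):
        # x: (N, C, H, W) feature map, z: (N, latent_dim) latent code.
        beta = torch.tanh(self.bias(z))  # bounded per-channel bias
        return self.norm(x) + beta[:, :, None, None]


# Usage sketch: the latent code shifts the mean of the normalized features,
# which is how the abstract describes the target mapping being selected.
if __name__ == "__main__":
    cbn = CentralBiasingNorm2d(num_features=64, latent_dim=8)
    x = torch.randn(4, 64, 32, 32)
    z = torch.randn(4, 8)
    out = cbn(x, z)
    print(out.shape)  # torch.Size([4, 64, 32, 32])
```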