
DualContrast: Unsupervised Disentangling of Content and Transformations with Implicit Parameterization

Abstract

Unsupervised disentanglement of content and transformation is important for analyzing shape-focused scientific image datasets, as it facilitates downstream image-based shape-analysis tasks. Existing approaches address the problem by explicitly parameterizing the transformation latent codes in a generative model, which significantly reduces their expressiveness; moreover, they are not applicable when transformations cannot be readily parameterized. An alternative to such explicit approaches is contrastive methods with data augmentation, which implicitly disentangle transformations and content. However, existing contrastive strategies are insufficient for this purpose. We therefore developed DualContrast, a novel contrastive method with generative modeling for unsupervised disentanglement of content and transformations in shape-focused image datasets. DualContrast creates positive and negative pairs for both content and transformation from the data and latent spaces. Our extensive experiments demonstrate the efficacy of DualContrast over existing self-supervised and explicit-parameterization approaches. With DualContrast, we disentangled protein composition and conformations in cellular 3D protein images, which was unattainable with existing disentanglement approaches.
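To make the contrastive idea in the abstract concrete, here is a minimal, hypothetical sketch of an InfoNCE-style loss applied to content embeddings, where an image and its augmented (e.g., rotated) copy form a positive pair and all other images in the batch serve as negatives. The function name `nt_xent` and all shapes are illustrative assumptions, not the paper's actual implementation; DualContrast additionally forms pairs for the transformation factor and draws pairs from the latent space, which this sketch omits.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two batches of embeddings.

    z1[i] and z2[i] are treated as a positive pair; every other pairing
    (z1[i], z2[j]) with j != i acts as a negative.
    """
    # Normalize embeddings so similarities are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (N, N) similarity matrix

    # Row-wise log-softmax; the diagonal entries are the positive pairs.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy usage: stand-in content embeddings of a batch and of its augmented
# (e.g., rotated) copies; a content encoder trained with this loss is pushed
# to be invariant to the augmentation.
rng = np.random.default_rng(0)
content_z = rng.normal(size=(8, 16))                          # originals
content_z_aug = content_z + 0.05 * rng.normal(size=(8, 16))   # augmented
loss = nt_xent(content_z, content_z_aug)
```

Under this framing, the same augmented pair that is positive for the content code can serve as a negative for the transformation code, which is one way the "dual" pairing described in the abstract can be realized.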

@article{uddin2025_2405.16796,
  title={DualContrast: Unsupervised Disentangling of Content and Transformations with Implicit Parameterization},
  author={Mostofa Rafid Uddin and Min Xu},
  journal={arXiv preprint arXiv:2405.16796},
  year={2025}
}