Semantic-Space-Intervened Diffusive Alignment for Visual Classification

Abstract

Cross-modal alignment is an effective approach to improving visual classification. Existing studies typically enforce a one-step mapping that uses deep neural networks to project visual features to mimic the distribution of textual features. However, they often struggle to find such a projection because the two modalities differ in both the distribution of class-wise samples and the range of their feature values. To address this issue, this paper proposes a novel Semantic-Space-Intervened Diffusive Alignment method, termed SeDA, which models a semantic space as a bridge in the visual-to-textual projection, considering that both types of features share the same class-level information in classification. More importantly, a bi-stage diffusion framework is developed to enable progressive alignment between the two modalities. Specifically, SeDA first employs a Diffusion-Controlled Semantic Learner to model the semantic space of visual features by constraining the interactive features of the diffusion model and the category centers of visual features. In its later stage, SeDA's Diffusion-Controlled Semantic Translator focuses on learning the distribution of textual features from the semantic space. Meanwhile, a Progressive Feature Interaction Network introduces stepwise feature interactions at each alignment step, progressively integrating textual information into the mapped features. Experimental results show that SeDA achieves stronger cross-modal feature alignment, leading to superior performance over existing methods across multiple scenarios.
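To make the bi-stage idea concrete, the following is a minimal toy sketch, not the paper's implementation: it replaces the diffusion model with simple iterative interpolation. Stage 1 moves a visual feature toward its class-level semantic center (standing in for the Diffusion-Controlled Semantic Learner), and stage 2 translates the result toward the textual feature, blending in textual information at every step (standing in for the Diffusion-Controlled Semantic Translator and the Progressive Feature Interaction Network). The function name, signatures, and the interpolation schedule are all hypothetical.

```python
def diffusive_align(visual, sem_center, text_feat, steps=10, mix=0.1):
    """Toy two-stage progressive alignment (hypothetical simplification of SeDA).

    visual, sem_center, text_feat: equal-length lists of floats.
    steps: number of progressive alignment steps per stage.
    mix: strength of the stepwise textual-feature interaction.
    """
    # Stage 1: pull the visual feature toward the class-level semantic center,
    # standing in for the Diffusion-Controlled Semantic Learner.
    z = list(visual)
    for t in range(steps):
        a = (t + 1) / steps
        z = [(1 - a) * v + a * c for v, c in zip(visual, sem_center)]

    # Stage 2: translate the semantic feature toward the textual feature,
    # with a stepwise interaction that progressively mixes in textual info.
    sem = list(z)
    for t in range(steps):
        a = (t + 1) / steps
        z = [(1 - a) * s + a * x for s, x in zip(sem, text_feat)]
        z = [(1 - mix) * zi + mix * xi for zi, xi in zip(z, text_feat)]
    return z
```

By the final step the mapped feature coincides with the textual target, illustrating why routing through an intermediate semantic space avoids asking one projection to bridge both the class-wise distribution gap and the feature-range gap at once.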

@article{li2025_2505.05721,
  title={Semantic-Space-Intervened Diffusive Alignment for Visual Classification},
  author={Zixuan Li and Lei Meng and Guoqing Chao and Wei Wu and Xiaoshuo Yan and Yimeng Yang and Zhuang Qi and Xiangxu Meng},
  journal={arXiv preprint arXiv:2505.05721},
  year={2025}
}