Explaining Domain Shifts in Language: Concept erasing for Interpretable Image Classification

Abstract

Concept-based models map black-box representations to human-understandable concepts, making the decision-making process more transparent and allowing users to understand the reasons behind predictions. However, domain-specific concepts often affect the final predictions, which undermines the model's generalization capability and prevents it from being deployed in high-stakes applications. In this paper, we propose a novel Language-guided Concept-Erasing (LanCE) framework. In particular, we empirically demonstrate that pre-trained vision-language models (VLMs) can approximate distinct visual domain shifts via domain descriptors, and that prompting large language models (LLMs) can easily produce a wide range of descriptors for unseen visual domains. We then introduce a novel plug-in domain descriptor orthogonality (DDO) regularizer to mitigate the impact of these domain-specific concepts on the final predictions. Notably, the DDO regularizer is agnostic to the design of concept-based models, and we integrate it into several prevailing models. Through domain-generalization evaluations on four standard benchmarks and three newly introduced benchmarks, we demonstrate that DDO significantly improves out-of-distribution (OOD) generalization over previous state-of-the-art concept-based models. Code is available at this https URL.
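To make the idea of the DDO regularizer more concrete, below is a minimal PyTorch-style sketch of one way such an orthogonality penalty could be implemented. It is not the paper's exact formulation: the assumption here is that domain shifts are approximated as differences between VLM text embeddings of LLM-generated domain descriptors (e.g. "a sketch of a photo") and a neutral prompt ("a photo"), and that the regularizer penalizes alignment between these shift directions and the rows of an image-to-concept projection. All names (text_encoder, tokenize, concept_proj, base_prompt) are hypothetical.

import torch
import torch.nn.functional as F

def domain_shift_directions(text_encoder, tokenize, domain_prompts, base_prompt="a photo"):
    # Encode a neutral prompt and the domain descriptors, then take normalized
    # embedding differences as approximate visual domain-shift directions.
    with torch.no_grad():
        base = text_encoder(tokenize([base_prompt]))    # (1, d)
        dom = text_encoder(tokenize(domain_prompts))    # (k, d)
    return F.normalize(dom - base, dim=-1)              # (k, d)

def ddo_regularizer(concept_proj, shift_dirs):
    # concept_proj: (num_concepts, d) rows mapping VLM image features to concept scores.
    # shift_dirs:   (num_domains, d) normalized domain-shift directions.
    # Penalize squared cosine similarity so that concept-prediction directions
    # stay orthogonal to the simulated domain shifts.
    cos = F.normalize(concept_proj, dim=-1) @ shift_dirs.t()   # (num_concepts, num_domains)
    return (cos ** 2).mean()

# Hypothetical training-time usage, added as a plug-in term to any concept-based model:
#   loss = task_loss + lam * ddo_regularizer(W_concept, shift_dirs)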

@article{zeng2025_2503.18483,
  title={Explaining Domain Shifts in Language: Concept erasing for Interpretable Image Classification},
  author={Zequn Zeng and Yudi Su and Jianqiao Sun and Tiansheng Wen and Hao Zhang and Zhengjue Wang and Bo Chen and Hongwei Liu and Jiawei Ma},
  journal={arXiv preprint arXiv:2503.18483},
  year={2025}
}