Expanding pre-trained zero-shot counting models to handle unseen categories requires more than simply adding new prompts, because prompt expansion alone does not achieve the alignment between text and visual features needed for accurate counting. We introduce RichCount, the first framework to address this limitation, employing a two-stage training strategy that enhances text encoding and strengthens the association between prompts and objects in images. RichCount improves zero-shot counting for unseen categories through two key objectives: (1) enriching text features with a feed-forward network and adapter trained on text-image similarity, producing robust, aligned representations; and (2) applying this refined encoder to counting tasks, enabling effective generalization across diverse prompts and complex images. In this way, RichCount goes beyond simple prompt expansion to establish meaningful feature alignment that supports accurate counting of novel categories. Extensive experiments on three benchmark datasets demonstrate the effectiveness of RichCount, which achieves state-of-the-art zero-shot counting performance and significantly improves generalization to unseen categories in open-world scenarios.
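The two-stage design described above can be sketched as follows: a feed-forward adapter enriches frozen text-encoder features and is first trained with a text-image similarity objective (Stage 1); the enriched features are then correlated with image feature maps to regress a density map whose sum gives the count (Stage 2). This is a minimal sketch assuming a CLIP-style backbone; the module names, dimensions, and contrastive loss below are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of the two-stage idea (assumed CLIP-style features);
# names, dimensions, and losses are placeholders, not the RichCount code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextFeatureAdapter(nn.Module):
    """Feed-forward adapter that enriches frozen text-encoder features."""

    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, text_feat: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the enriched feature close to the original prompt embedding.
        return F.normalize(text_feat + self.ffn(text_feat), dim=-1)


def stage1_alignment_loss(text_feat, image_feat, temperature: float = 0.07):
    """Stage 1: contrastive text-image similarity loss for training the adapter."""
    logits = text_feat @ image_feat.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


class CountingHead(nn.Module):
    """Stage 2: correlate enriched text features with image feature maps to predict a density map."""

    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, image_map: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # image_map: (B, dim, H, W); text_feat: (B, dim)
        sim = torch.einsum("bchw,bc->bhw", F.normalize(image_map, dim=1), text_feat)
        return self.decoder(sim.unsqueeze(1))  # density map of shape (B, 1, H, W)


if __name__ == "__main__":
    B, D, H, W = 4, 512, 24, 24
    adapter, head = TextFeatureAdapter(D), CountingHead()
    text = F.normalize(torch.randn(B, D), dim=-1)        # frozen text-encoder output
    img_global = F.normalize(torch.randn(B, D), dim=-1)  # pooled image feature
    img_map = torch.randn(B, D, H, W)                     # spatial image features

    enriched = adapter(text)
    loss1 = stage1_alignment_loss(enriched, img_global)   # Stage 1: alignment training
    count = head(img_map, enriched).sum(dim=(1, 2, 3))    # Stage 2: density map -> count
    print(loss1.item(), count.shape)
```

In this sketch the text encoder itself stays frozen and only the adapter (and later the counting head) is trained, which mirrors the abstract's emphasis on enriching text features rather than retraining the backbone.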
@article{zhu2025_2505.15398,
  title   = {Expanding Zero-Shot Object Counting with Rich Prompts},
  author  = {Huilin Zhu and Senyao Li and Jingling Yuan and Zhengwei Yang and Yu Guo and Wenxuan Liu and Xian Zhong and Shengfeng He},
  journal = {arXiv preprint arXiv:2505.15398},
  year    = {2025}
}