Exemplar-Free Counting aims to count objects of interest without intensive annotations of objects or exemplars. To achieve this, we propose a Gated Context-Aware Swin-UNet (GCA-SUNet) that directly maps an input image to a density map of countable objects. Specifically, a set of Swin transformers forms an encoder that derives a robust feature representation, and a Gated Context-Aware Modulation block is designed to suppress irrelevant objects or background through a gate mechanism and to exploit the attentive support of objects of interest through a self-similarity matrix. The gate strategy is also incorporated into the bottleneck network and the decoder of the Swin-UNet to highlight the features most relevant to objects of interest. By explicitly exploiting the attentive support among countable objects and eliminating irrelevant features through the gate mechanisms, the proposed GCA-SUNet focuses on and counts objects of interest without relying on predefined categories or exemplars. Experimental results on real-world datasets such as FSC-147 and CARPK demonstrate that GCA-SUNet significantly and consistently outperforms state-of-the-art methods. The code is available at this https URL.
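The abstract describes modulation in two parts: a self-similarity matrix that gathers attentive support among repeated objects, and a gate that suppresses background features. The following is a minimal NumPy sketch of that general idea, not the authors' implementation; the function name, the cosine-similarity affinity, and the mean-similarity gating score are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_context_aware_modulation(feats):
    """Illustrative sketch (not the paper's code).

    feats: (N, C) array of N token features with C channels,
    e.g. flattened encoder features of an image.
    """
    n, c = feats.shape
    # Self-similarity matrix: pairwise affinity among tokens (N x N),
    # here computed as cosine similarity (an assumption).
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T
    # Attentive support: each token aggregates features of similar tokens,
    # so repeated countable objects reinforce one another.
    support = softmax(sim / np.sqrt(c), axis=-1) @ feats
    # Gate: a per-token sigmoid score; tokens with low overall similarity
    # (e.g. background) receive a weaker modulated contribution.
    gate = 1.0 / (1.0 + np.exp(-sim.mean(axis=1, keepdims=True)))
    return gate * support + (1.0 - gate) * feats
```

In this sketch the gate blends the similarity-aggregated features with the original ones, so tokens that resemble many others (likely instances of the repeated object) are dominated by their attentive support, while isolated background tokens remain closer to their input features.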
@article{wu2025_2409.12249,
  title={GCA-SUNet: A Gated Context-Aware Swin-UNet for Exemplar-Free Counting},
  author={Yuzhe Wu and Yipeng Xu and Tianyu Xu and Jialu Zhang and Jianfeng Ren and Xudong Jiang},
  journal={arXiv preprint arXiv:2409.12249},
  year={2025}
}