Transformer-Based Dual-Optical Attention Fusion Crowd Head Point Counting and Localization Network

This paper proposes TAPNet, a dual-optical attention fusion crowd head point counting model, to address the difficulty of accurate counting under UAV views in complex scenes such as dense crowd occlusion and low light. The model introduces a dual-optical attention fusion module (DAFP) that incorporates complementary information from infrared images to improve the accuracy and robustness of all-day crowd counting. To fully exploit the information in both modalities and to resolve the inaccurate localization caused by systematic misalignment between image pairs, the paper further proposes an adaptive dual-optical feature decomposition fusion module (AFDF). In addition, the training strategy is optimized with spatial random offset data augmentation to improve model robustness. Experiments on two challenging public datasets, DroneRGBT and GAIIC2, show that the proposed method outperforms existing techniques, especially in dense, low-light scenes. Code is available at this https URL
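The abstract does not detail DAFP's internals, so as a rough illustration only, here is a minimal PyTorch sketch of one common way to fuse RGB and infrared token features with cross-modal attention: each modality queries the other, and the two enhanced streams are projected back to a shared width. All names (`CrossModalAttentionFusion`, `dim`, `num_heads`) are hypothetical assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Illustrative dual-optical fusion: RGB and IR tokens cross-attend,
    then the enhanced streams are concatenated and projected. This is a
    generic sketch, not the paper's DAFP module."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.rgb_to_ir = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ir_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, rgb_feat, ir_feat):
        # rgb_feat, ir_feat: (B, N, dim) token sequences from each modality.
        rgb_enh, _ = self.rgb_to_ir(rgb_feat, ir_feat, ir_feat)  # RGB queries IR
        ir_enh, _ = self.ir_to_rgb(ir_feat, rgb_feat, rgb_feat)  # IR queries RGB
        # Concatenate the two enhanced streams and project back to dim.
        return self.fuse(torch.cat([rgb_enh, ir_enh], dim=-1))
```

A fused feature map of this kind would then feed a point-prediction head that regresses head locations and counts.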
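The spatial random offset augmentation is likewise only named in the abstract; a plausible reading is that one modality is randomly translated by a few pixels during training so the model learns to tolerate RGB-infrared misalignment. The sketch below shows that idea; the helper `random_spatial_offset` and the `max_offset` parameter are assumptions for illustration.

```python
import random
import torch
import torch.nn.functional as F

def random_spatial_offset(img, max_offset=8):
    """Translate a (C, H, W) image tensor by a random (dx, dy) within
    +/- max_offset pixels, zero-filling the exposed border. Assumed
    scheme: shift one modality to simulate RGB-IR misalignment."""
    dx = random.randint(-max_offset, max_offset)
    dy = random.randint(-max_offset, max_offset)
    _, h, w = img.shape
    # Zero-pad on all sides, then crop a window offset by (dy, dx).
    padded = F.pad(img, (max_offset, max_offset, max_offset, max_offset))
    top, left = max_offset + dy, max_offset + dx
    return padded[:, top:top + h, left:left + w], (dx, dy)
```

During training one might apply this to the infrared frame only, leaving the RGB frame and its point annotations fixed, so the fusion module must cope with the induced offset.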
@article{zhou2025_2505.06937,
  title={Transformer-Based Dual-Optical Attention Fusion Crowd Head Point Counting and Localization Network},
  author={Fei Zhou and Yi Li and Mingqing Zhu},
  journal={arXiv preprint arXiv:2505.06937},
  year={2025}
}