L2HCount: Generalizing Crowd Counting from Low to High Crowd Density via Density Simulation

17 March 2025
Guoliang Xu
Jianqin Yin
Ren Zhang
Yonghao Dang
Feng Zhou
Bo Yu
Abstract

Since COVID-19, crowd counting has found wide application. While supervised methods are reliable, annotation is far harder in high-density scenes, where heads are small and heavily occluded, than in low-density scenes. This raises a natural question: can a model trained only on low-density scenes generalize to high-density ones? We propose L2HCount, a low- to high-density generalization framework that learns high-density crowd patterns from low-density scenes, enabling it to generalize well to high-density scenes. Specifically, we first introduce a High-Density Simulation Module and a Ground-Truth Generation Module, which use an image-shifting technique to construct fake high-density images together with their corresponding ground-truth crowd annotations, effectively simulating high-density crowd patterns. However, the simulated images suffer from two issues: image blurring and loss of low-density image characteristics. We therefore propose, second, a Head Feature Enhancement Module to extract clear head features from the simulated high-density scenes. Third, we propose a Dual-Density Memory Encoding Module that uses two crowd memories to learn scene-specific patterns from the low-density and simulated high-density scenes, respectively. Extensive experiments on four challenging datasets demonstrate the promising performance of L2HCount.
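
The image-shifting idea behind the High-Density Simulation and Ground-Truth Generation Modules can be sketched as follows. This is only a minimal illustration of the concept described in the abstract, not the paper's implementation: the number of shifted copies (num_shifts), the shift range (max_shift), the wrap-around shifting via np.roll, and the averaging blend are all illustrative assumptions.

import numpy as np

def simulate_high_density(image, density_map, num_shifts=3, max_shift=40, seed=0):
    """Build a fake high-density image and its ground-truth density map by
    overlaying shifted copies of a low-density sample.

    num_shifts, max_shift, the wrap-around shifts, and the averaging blend
    are illustrative assumptions, not the paper's exact scheme.
    """
    rng = np.random.default_rng(seed)
    sim_image = image.astype(np.float32)
    sim_density = density_map.astype(np.float32)

    for _ in range(num_shifts):
        # Shift the crowd by a random offset and overlay it on the original.
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        sim_image += np.roll(image, shift=(dy, dx), axis=(0, 1))
        # Apply the same shift to the density map so annotations stay aligned,
        # and sum the maps so the simulated ground-truth count adds up.
        sim_density += np.roll(density_map, shift=(dy, dx), axis=(0, 1))

    # Rescale overlaid pixel intensities back into the original range; the
    # density map is left summed, so its integral is (num_shifts + 1) times
    # the original count.
    sim_image /= (num_shifts + 1)
    return sim_image.astype(image.dtype), sim_density

# Usage on a dummy 256x256 grayscale crop and its density map:
image = np.zeros((256, 256), dtype=np.uint8)
density_map = np.zeros((256, 256), dtype=np.float32)
fake_image, fake_density = simulate_high_density(image, density_map)
print(fake_density.sum())  # 4x the original count (here 0.0)

Overlaying and blending shifted copies in this way produces ghosted, blurred regions, which is roughly the kind of image-blurring issue the abstract notes for the simulated images and which motivates the Head Feature Enhancement Module.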

@article{xu2025_2503.12935,
  title={L2HCount: Generalizing Crowd Counting from Low to High Crowd Density via Density Simulation},
  author={Guoliang Xu and Jianqin Yin and Ren Zhang and Yonghao Dang and Feng Zhou and Bo Yu},
  journal={arXiv preprint arXiv:2503.12935},
  year={2025}
}