
Learning Extremely High Density Crowds as Active Matters

15 March 2025
Feixiang He
Jiangbei Yue
Jialin Zhu
Armin Seyfried
Dan Casas
Julien Pettré
He Wang
Abstract

Video-based analysis and prediction of high-density crowds has been a long-standing topic in computer vision. It is notoriously difficult due to, but not limited to, the lack of high-quality data and complex crowd dynamics. Consequently, it has been relatively understudied. In this paper, we propose a new approach that aims to learn from in-the-wild videos, which are often of low quality, where it is difficult to track individuals or count heads. The key novelty is a new physics prior for modeling crowd dynamics. We model high-density crowds as active matter, a continuum with active particles subject to stochastic forces, which we name 'crowd material'. Our physics model is combined with neural networks, resulting in a neural stochastic differential equation system that can mimic the complex crowd dynamics. Due to the lack of similar research, we adapt a range of existing methods that are close to ours for comparison. Through exhaustive evaluation, we show that our model outperforms existing methods in analyzing and forecasting extremely high-density crowds. Furthermore, since our model is a continuous-time physics model, it can be used for simulation and analysis, providing strong interpretability. This is categorically different from most deep learning methods, which are discrete-time models and black boxes.
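To make the "neural stochastic differential equation" idea concrete, below is a minimal sketch of an SDE whose drift is a learned network, integrated with the Euler–Maruyama scheme. This is an illustrative assumption about the general technique, not the authors' implementation: the tiny MLP, the state layout (2-D states at 8 sample points), and all parameter values are hypothetical placeholders for the learned "crowd material" dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_drift(x, W1, b1, W2, b2):
    """Tiny MLP standing in for a learned drift term f_theta(x)."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def euler_maruyama(x0, steps, dt, sigma, params, rng):
    """Integrate dx = f_theta(x) dt + sigma dW with Euler-Maruyama.

    Returns the trajectory of shape (steps + 1, *x0.shape).
    """
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        drift = mlp_drift(x, *params)
        noise = rng.normal(size=x.shape) * np.sqrt(dt)  # Brownian increment
        x = x + drift * dt + sigma * noise
        traj.append(x.copy())
    return np.stack(traj)

# Hypothetical toy state: 2-D quantities sampled at 8 spatial points.
d = 2
W1 = rng.normal(size=(d, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.normal(size=(16, d)) * 0.1
b2 = np.zeros(d)

x0 = rng.normal(size=(8, d))
traj = euler_maruyama(x0, steps=50, dt=0.01, sigma=0.05,
                      params=(W1, b1, W2, b2), rng=rng)
print(traj.shape)
```

In a trained model, the MLP weights would be fit so that simulated trajectories match observed crowd motion; because the model is a continuous-time SDE, the same fitted dynamics can be rolled out for simulation and inspected for analysis, which is the interpretability advantage the abstract highlights.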

@article{he2025_2503.12168,
  title={Learning Extremely High Density Crowds as Active Matters},
  author={Feixiang He and Jiangbei Yue and Jialin Zhu and Armin Seyfried and Dan Casas and Julien Pettr\'e and He Wang},
  journal={arXiv preprint arXiv:2503.12168},
  year={2025}
}