From Broadcast to Minimap: Achieving State-of-the-Art SoccerNet Game State Reconstruction

8 April 2025
Vladimir Golovkin
Nikolay Nemtsev
Vasyl Shandyba
Oleg Udin
Nikita Kasatkin
Pavel Kononov
Anton Afanasiev
Sergey Ulasen
Andrei Boiarov
Abstract

Game State Reconstruction (GSR), a critical task in Sports Video Understanding, involves the precise tracking and localization of all individuals on the football field, including players, goalkeepers, referees, and others, in real-world coordinates. This capability enables coaches and analysts to derive actionable insights into player movements, team formations, and game dynamics, ultimately optimizing training strategies and enhancing competitive advantage. Achieving accurate GSR with a single-camera setup is highly challenging due to frequent camera movements, occlusions, and dynamic scene content. In this work, we present a robust end-to-end pipeline for tracking players across an entire match using a single-camera setup. Our solution integrates a fine-tuned YOLOv5m for object detection, a SegFormer-based camera parameter estimator, and a DeepSORT-based tracking framework enhanced with re-identification, orientation prediction, and jersey number recognition. By ensuring both spatial accuracy and temporal consistency, our method delivers state-of-the-art game state reconstruction, securing first place in the SoccerNet Game State Reconstruction Challenge 2024 and significantly outperforming competing methods.
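As a rough illustration of the "broadcast to minimap" step the abstract describes, the sketch below projects detected bounding boxes from image pixels onto pitch coordinates with a planar homography. This is not the authors' implementation: the homography matrix, pitch scale, and example boxes are hypothetical placeholders, and in the actual pipeline the camera parameters would come from the SegFormer-based estimator rather than being hard-coded.

# Minimal sketch (assumed names and values): map detections to pitch coordinates.
import numpy as np
import cv2

def project_to_pitch(boxes_xyxy: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map bounding boxes (x1, y1, x2, y2) in image pixels to (x, y) pitch
    coordinates by projecting each box's bottom-center (foot point)."""
    feet = np.stack(
        [(boxes_xyxy[:, 0] + boxes_xyxy[:, 2]) / 2.0,  # horizontal center of the box
         boxes_xyxy[:, 3]],                            # bottom edge, i.e. the feet
        axis=1,
    ).astype(np.float32).reshape(-1, 1, 2)
    pitch_pts = cv2.perspectiveTransform(feet, H)      # apply the 3x3 homography
    return pitch_pts.reshape(-1, 2)

if __name__ == "__main__":
    # Hypothetical pixel-to-meter homography for a 105 x 68 m pitch.
    H = np.array([[0.05, 0.0, -10.0],
                  [0.0, 0.08, -5.0],
                  [0.0, 0.0, 1.0]], dtype=np.float64)
    # Two example detections (x1, y1, x2, y2) as an object detector might emit.
    boxes = np.array([[400, 300, 440, 420],
                      [900, 500, 950, 650]], dtype=np.float32)
    print(project_to_pitch(boxes, H))  # per-player (x, y) positions on the pitch

In the full method, these projected positions would then be associated over time by the tracking framework (re-identification, orientation, jersey numbers) to produce the final game state.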

View on arXiv
@article{golovkin2025_2504.06357,
  title={From Broadcast to Minimap: Achieving State-of-the-Art SoccerNet Game State Reconstruction},
  author={Vladimir Golovkin and Nikolay Nemtsev and Vasyl Shandyba and Oleg Udin and Nikita Kasatkin and Pavel Kononov and Anton Afanasiev and Sergey Ulasen and Andrei Boiarov},
  journal={arXiv preprint arXiv:2504.06357},
  year={2025}
}