Video-based Traffic Light Recognition by Rockchip RV1126 for Autonomous Driving

31 March 2025
Miao Fan
Xuxu Kong
Shengtong Xu
Haoyi Xiong
Xiangzeng Liu
Abstract

Real-time traffic light recognition is fundamental for autonomous driving safety and navigation in urban environments. While existing approaches rely on single-frame analysis from onboard cameras, they struggle with complex scenarios involving occlusions and adverse lighting conditions. We present ViTLR, a novel video-based end-to-end neural network that processes multiple consecutive frames to achieve robust traffic light detection and state classification. The architecture leverages a transformer-like design with convolutional self-attention modules, which is optimized specifically for deployment on the Rockchip RV1126 embedded platform. Extensive evaluations on two real-world datasets demonstrate that ViTLR achieves state-of-the-art performance while maintaining real-time processing capabilities (>25 FPS) on RV1126's NPU. The system shows superior robustness across temporal stability, varying target distances, and challenging environmental conditions compared to existing single-frame approaches. We have successfully integrated ViTLR into an ego-lane traffic light recognition system using HD maps for autonomous driving applications. The complete implementation, including source code and datasets, is made publicly available to facilitate further research in this domain.
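
The abstract describes a transformer-like design with convolutional self-attention applied to multiple consecutive frames. The following is a minimal, hypothetical PyTorch sketch of what such a convolutional self-attention block could look like; the class name, head count, frame count, and tensor layout are illustrative assumptions and are not taken from the paper or its released code.

# Hypothetical sketch: 1x1-convolutional query/key/value projections with
# attention computed jointly over the temporal and spatial positions of a clip.
import torch
import torch.nn as nn


class ConvTemporalSelfAttention(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        assert channels % heads == 0
        self.heads = heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width) feature maps for a clip.
        b, t, c, h, w = x.shape
        d = c // self.heads
        q, k, v = self.qkv(x.reshape(b * t, c, h, w)).chunk(3, dim=1)

        def to_tokens(y: torch.Tensor) -> torch.Tensor:
            # (b*t, c, h, w) -> (b, heads, t*h*w, d): every spatio-temporal
            # position becomes one token, so attention can mix across frames.
            y = y.reshape(b, t, self.heads, d, h * w)
            return y.permute(0, 2, 1, 4, 3).reshape(b, self.heads, t * h * w, d)

        q, k, v = map(to_tokens, (q, k, v))
        attn = torch.softmax(q @ k.transpose(-1, -2) / d ** 0.5, dim=-1)
        out = attn @ v  # (b, heads, t*h*w, d)
        out = (out.reshape(b, self.heads, t, h * w, d)
                  .permute(0, 2, 1, 4, 3)
                  .reshape(b * t, c, h, w))
        return self.proj(out).reshape(b, t, c, h, w)


if __name__ == "__main__":
    clip = torch.randn(2, 5, 64, 32, 32)   # 2 clips of 5 consecutive frames
    block = ConvTemporalSelfAttention(channels=64, heads=4)
    print(block(clip).shape)                # torch.Size([2, 5, 64, 32, 32])

Treating every spatio-temporal position as a token is only one plausible reading of "convolutional self-attention" over consecutive frames; the actual ViTLR formulation may restrict attention differently to meet the RV1126 NPU's real-time budget.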

@article{fan2025_2503.23965,
  title={Video-based Traffic Light Recognition by Rockchip RV1126 for Autonomous Driving},
  author={Miao Fan and Xuxu Kong and Shengtong Xu and Haoyi Xiong and Xiangzeng Liu},
  journal={arXiv preprint arXiv:2503.23965},
  year={2025}
}