
Joint Pedestrian and Vehicle Traffic Optimization in Urban Environments using Reinforcement Learning

Abstract

Reinforcement learning (RL) holds significant promise for adaptive traffic signal control. While existing RL-based methods are effective at reducing vehicular congestion, their vehicle-centric focus leaves pedestrian mobility needs and safety challenges unaddressed. In this paper, we present a deep RL framework for adaptive control of eight traffic signals along a real-world urban corridor, jointly optimizing pedestrian and vehicular efficiency. Our single-agent policy is trained on real-world pedestrian and vehicle demand data derived from Wi-Fi logs and video analysis. The results show significant improvements over traditional fixed-time signals, reducing average wait times per pedestrian and per vehicle by up to 67% and 52%, respectively, while simultaneously decreasing total accumulated wait times for both groups by up to 67% and 53%. The learned policy also generalizes across varying traffic demands, including conditions entirely unseen during training, validating RL's potential for developing transportation systems that serve all road users.

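The abstract describes jointly optimizing pedestrian and vehicular efficiency with a single RL agent. Below is a minimal Python sketch of what such a joint objective might look like, assuming a reward defined as a negatively weighted sum of accumulated waiting times; the names `IntersectionState`, `joint_reward`, and the weight `alpha` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' implementation): a joint reward
# that penalizes accumulated waiting time for both pedestrians and vehicles.
# The helper names and the weighting factor `alpha` are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class IntersectionState:
    """Per-step measurements collected from a simulated corridor."""
    vehicle_wait_times: List[float]     # seconds each queued vehicle has waited
    pedestrian_wait_times: List[float]  # seconds each waiting pedestrian has waited


def joint_reward(state: IntersectionState, alpha: float = 0.5) -> float:
    """Negative weighted sum of total pedestrian and vehicle waiting time.

    `alpha` trades off pedestrian vs. vehicle delay; 0.5 weights them equally.
    Because the reward is the negated delay, minimizing waiting time for both
    groups maximizes the reward the agent receives.
    """
    veh_delay = sum(state.vehicle_wait_times)
    ped_delay = sum(state.pedestrian_wait_times)
    return -(alpha * ped_delay + (1.0 - alpha) * veh_delay)


if __name__ == "__main__":
    # Toy example: 3 queued vehicles and 2 waiting pedestrians.
    state = IntersectionState(
        vehicle_wait_times=[12.0, 8.5, 3.0],
        pedestrian_wait_times=[20.0, 5.0],
    )
    print(joint_reward(state))  # -24.25 with equal weighting
```

In this sketch, tuning `alpha` toward 1.0 would prioritize pedestrian delay; the paper's reported gains for both groups suggest the actual objective balances the two rather than favoring either exclusively.
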
@article{poudel2025_2504.05018,
  title={Joint Pedestrian and Vehicle Traffic Optimization in Urban Environments using Reinforcement Learning},
  author={Bibek Poudel and Xuan Wang and Weizi Li and Lei Zhu and Kevin Heaslip},
  journal={arXiv preprint arXiv:2504.05018},
  year={2025}
}