End-to-end autonomous driving solutions, which process multi-modal sensory data to directly generate refined control commands, have become a dominant paradigm in autonomous driving research. However, these approaches predominantly depend on single-vehicle data collection for model training and optimization, resulting in significant challenges such as high data acquisition and annotation costs, the scarcity of critical driving scenarios, and fragmented datasets that impede model generalization. To mitigate these limitations, we introduce RS2AD, a novel framework for reconstructing and synthesizing vehicle-mounted LiDAR data from roadside sensor observations. Specifically, our method transforms roadside LiDAR point clouds into the vehicle-mounted LiDAR coordinate system by leveraging the target vehicle's relative pose. Subsequently, high-fidelity vehicle-mounted LiDAR data is synthesized through virtual LiDAR modeling, point cloud classification, and resampling techniques. To the best of our knowledge, this is the first approach to reconstruct vehicle-mounted LiDAR data from roadside sensor inputs. Extensive experimental evaluations demonstrate that incorporating the data generated by the RS2AD method (the RS2V-L dataset) into model training as a supplement to the KITTI dataset can significantly enhance the accuracy of 3D object detection and greatly improve the efficiency of end-to-end autonomous driving data generation. These findings strongly validate the effectiveness of the proposed method and underscore its potential in reducing dependence on costly vehicle-mounted data collection while improving the robustness of autonomous driving models.
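The abstract outlines two core steps: re-expressing roadside LiDAR points in the target vehicle's LiDAR frame using the vehicle's relative pose, and then resampling the transformed cloud to emulate a virtual vehicle-mounted sensor. The sketch below is not the authors' implementation; it is a minimal illustration of those two steps under stated assumptions. The relative pose is assumed to be a 4x4 homogeneous transform (T_ego_from_roadside), and the virtual-LiDAR parameters (64 channels, 0.2 degree azimuth resolution, vertical field of view) are illustrative placeholders, not values from the paper.

import numpy as np

def transform_to_ego_frame(points_roadside: np.ndarray,
                           T_ego_from_roadside: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) roadside point cloud."""
    pts_h = np.hstack([points_roadside, np.ones((points_roadside.shape[0], 1))])
    return (T_ego_from_roadside @ pts_h.T).T[:, :3]

def resample_virtual_lidar(points_ego: np.ndarray,
                           n_channels: int = 64,
                           horiz_res_deg: float = 0.2,
                           vert_fov_deg: tuple = (-24.8, 2.0)) -> np.ndarray:
    """Keep at most one (closest) point per virtual beam/azimuth cell,
    roughly emulating the angular sampling of a vehicle-mounted LiDAR."""
    x, y, z = points_ego[:, 0], points_ego[:, 1], points_ego[:, 2]
    r = np.linalg.norm(points_ego, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))                          # [-180, 180)
    elevation = np.degrees(np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1, 1)))

    # Discard points outside the virtual sensor's vertical field of view.
    idx = np.flatnonzero((elevation >= vert_fov_deg[0]) & (elevation <= vert_fov_deg[1]))

    # Assign each remaining point to a (channel, azimuth-bin) cell.
    chan = np.floor((elevation[idx] - vert_fov_deg[0]) /
                    (vert_fov_deg[1] - vert_fov_deg[0]) * (n_channels - 1)).astype(int)
    az_bin = np.floor((azimuth[idx] + 180.0) / horiz_res_deg).astype(int)
    cell = chan * int(360.0 / horiz_res_deg + 1) + az_bin

    # For each cell, keep the closest return (an occlusion-consistent choice).
    order = np.lexsort((r[idx], cell))
    keep_first = np.concatenate(([True], np.diff(cell[order]) != 0))
    return points_ego[idx[order[keep_first]]]

if __name__ == "__main__":
    # Synthetic roadside cloud and an assumed roadside-to-ego pose, for illustration only.
    rng = np.random.default_rng(0)
    roadside_pts = rng.uniform(-50, 50, size=(100_000, 3))
    T = np.eye(4)
    T[:3, 3] = [-30.0, 5.0, -4.5]
    ego_pts = transform_to_ego_frame(roadside_pts, T)
    virtual_scan = resample_virtual_lidar(ego_pts)
    print(virtual_scan.shape)

This keeps only the geometric transformation and angular resampling; the paper's additional stages (point cloud classification and virtual LiDAR modeling of occlusion and beam characteristics) are not represented here.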
@article{xing2025_2503.07085,
  title   = {RS2AD: End-to-End Autonomous Driving Data Generation from Roadside Sensor Observations},
  author  = {Ruidan Xing and Runyi Huang and Qing Xu and Lei He},
  journal = {arXiv preprint arXiv:2503.07085},
  year    = {2025}
}