One of the challenges in reducing the gap between machine-level and human-level driving is how to endow the system with the learning capacity to handle the coupled complexity of environments, intentions, and dynamics. In this paper, we propose a hierarchical driving model that explicitly models continuous intention and continuous dynamics, decoupling this complexity in the observation-to-action reasoning learned from human driving data. Specifically, the continuous intention module takes the route planning map obtained from GPS and IMU, together with perception from an RGB camera and LiDAR, as input and generates a potential map in which obstacles and intentions are encoded as grid-based potentials. The potential map, together with the current dynamics, then serves as the condition for a continuous function approximator network that outputs a continuous trajectory, whose derivatives can be used for supervision without additional parameters. Finally, we validate our method on both datasets and a simulator, demonstrating that it achieves higher prediction accuracy for displacement and velocity and generates smoother trajectories. The method is also deployed on a real vehicle with loop latency, validating its effectiveness. To the best of our knowledge, this is the first work to produce a driving trajectory with a continuous function approximator network.
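To illustrate the core idea of a continuous trajectory head whose time derivatives supply velocity supervision for free, here is a minimal sketch (not the authors' code): a small network maps a query time and a condition vector (standing in for the encoded potential map plus current dynamics) to a 2-D position, and the velocity target is obtained from autograd rather than extra output parameters. All names, sizes, and the loss weighting below are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ContinuousTrajectoryNet(nn.Module):
    """Maps a query time t and a condition vector to an (x, y) position."""

    def __init__(self, cond_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1 + cond_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),  # (x, y) position at time t
        )

    def forward(self, t, cond):
        # t: (B, 1) query times; cond: (B, cond_dim) assumed encoding of the
        # potential map and current dynamics
        return self.mlp(torch.cat([t, cond], dim=-1))


def trajectory_loss(model, t, cond, pos_gt, vel_gt):
    """Position loss plus a velocity loss taken from d(position)/dt via autograd."""
    t = t.clone().requires_grad_(True)
    pos = model(t, cond)                                    # (B, 2)
    # Per-sample derivatives: each sample's output depends only on its own t,
    # so summing over the batch before calling grad is safe.
    vx = torch.autograd.grad(pos[:, 0].sum(), t, create_graph=True)[0]
    vy = torch.autograd.grad(pos[:, 1].sum(), t, create_graph=True)[0]
    vel = torch.cat([vx, vy], dim=-1)                       # (B, 2)
    return (nn.functional.mse_loss(pos, pos_gt)
            + nn.functional.mse_loss(vel, vel_gt))


# Usage with random tensors standing in for real training data
model = ContinuousTrajectoryNet()
t = torch.rand(8, 1)
cond = torch.randn(8, 128)
loss = trajectory_loss(model, t, cond, torch.randn(8, 2), torch.randn(8, 2))
loss.backward()
```

Because the trajectory is a differentiable function of time, the same network can be queried at arbitrary timestamps and supervised on velocity without enlarging the output head, which is the property the abstract highlights.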