Comparing Behavioural Cloning and Reinforcement Learning for Spacecraft Guidance and Control Networks
Guidance & control networks (G&CNETs) provide a promising alternative to conventional on-board guidance and control (G&C) architectures for spacecraft, offering a differentiable, end-to-end representation of the G&C architecture. When training G&CNETs, two predominant paradigms emerge: behavioural cloning (BC), which mimics optimal trajectories, and reinforcement learning (RL), which learns optimal behaviour through trial and error. Although both approaches have been adopted in the G&CNET-related literature, direct comparisons are notably absent. To address this, we conduct a systematic evaluation of BC and RL specifically for training G&CNETs on continuous-thrust spacecraft trajectory optimisation tasks. We introduce a novel RL training framework tailored to G&CNETs, incorporating decoupled action and control frequencies alongside reward redistribution strategies to stabilise training and to provide a fair comparison. Our results show that BC-trained G&CNETs excel at closely replicating expert policy behaviour, and thus the optimal control structure of a deterministic environment, but can be constrained by the quality and coverage of the training dataset. In contrast, RL-trained G&CNETs not only demonstrate superior adaptability to stochastic conditions, but can also discover solutions that improve upon suboptimal expert demonstrations, sometimes revealing globally optimal strategies missed during the generation of the training samples.
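To make the behavioural-cloning paradigm mentioned above concrete, the sketch below shows BC as supervised regression of a small feed-forward network onto optimal state-control pairs. This is a minimal, illustrative example only; the network architecture, state and control dimensions, loss function, and random stand-in data are assumptions and do not reflect the paper's actual setup.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# behavioural cloning of a G&CNET as supervised regression on
# state-control pairs sampled from optimal trajectories.
import torch
import torch.nn as nn

# Stand-in dataset: 7-dimensional spacecraft states (e.g. position,
# velocity, mass) mapped to 3-dimensional optimal thrust vectors, as a
# trajectory optimiser might provide. Random values used here.
states = torch.randn(1024, 7)
controls = torch.randn(1024, 3)

# Hypothetical G&CNET: a small fully connected network.
gcnet = nn.Sequential(
    nn.Linear(7, 128), nn.Softplus(),
    nn.Linear(128, 128), nn.Softplus(),
    nn.Linear(128, 3),
)

optimizer = torch.optim.Adam(gcnet.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    # The network is trained to imitate the expert controls directly,
    # which is why BC quality hinges on dataset coverage and optimality.
    loss = loss_fn(gcnet(states), controls)
    loss.backward()
    optimizer.step()
```

In an RL formulation, by contrast, the same network would be updated from rewards collected by rolling out the policy in a (possibly stochastic) dynamics simulator, rather than from expert labels.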