Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation

Multirotors play a significant role in diverse field robotics applications but remain highly susceptible to actuator failures, which lead to rapid instability and compromised mission reliability. While various fault-tolerant control (FTC) strategies using reinforcement learning (RL) have been widely explored, most previous approaches require prior knowledge of the multirotor model or struggle to adapt to new configurations. To address these limitations, we propose a novel hybrid RL-based FTC framework integrated with a transformer-based online adaptation module. Our framework leverages a transformer architecture to infer latent representations in real time, enabling adaptation to previously unseen system models without retraining. We evaluate our method in a PyBullet simulation under loss-of-effectiveness actuator faults, achieving a 95% success rate and a positional root mean square error (RMSE) of 0.129 m, outperforming existing adaptation methods, which reach 86% success and an RMSE of 0.153 m. Further evaluations on quadrotors with varying configurations confirm the robustness of our framework across untrained dynamics. These results demonstrate the potential of our framework to enhance the adaptability and reliability of multirotors, enabling efficient fault management in dynamic and uncertain environments. Website is available at this http URL
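To make the idea of transformer-based online adaptation concrete, the sketch below shows one way a self-attention encoder could map a rolling history of state-action pairs to a latent context vector that conditions a controller. This is a minimal illustrative example, not the authors' implementation: the dimensions, the single-head attention layer, and the randomly initialized weights (which a real module would learn during training) are all assumptions.

```python
import numpy as np

# Illustrative dimensions (assumptions, not taken from the paper).
STATE_DIM, ACTION_DIM, LATENT_DIM, HISTORY = 12, 4, 8, 20
EMB_DIM = 16

rng = np.random.default_rng(0)

# Randomly initialized projections; a trained adaptation module would
# learn these jointly with the RL policy.
W_in = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, EMB_DIM))
W_q = rng.normal(scale=0.1, size=(EMB_DIM, EMB_DIM))
W_k = rng.normal(scale=0.1, size=(EMB_DIM, EMB_DIM))
W_v = rng.normal(scale=0.1, size=(EMB_DIM, EMB_DIM))
W_out = rng.normal(scale=0.1, size=(EMB_DIM, LATENT_DIM))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def infer_latent(history):
    """Map a (HISTORY, STATE_DIM + ACTION_DIM) buffer of recent
    state-action pairs to a single latent vector via self-attention."""
    tokens = history @ W_in                       # embed each timestep
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(q @ k.T / np.sqrt(EMB_DIM))    # attention weights
    ctx = attn @ v                                # attended features
    return ctx.mean(axis=0) @ W_out               # pool to one latent

# Usage: at each control step, refresh the buffer with the newest
# state-action pair and hand the latent to the policy as extra input.
buf = rng.normal(size=(HISTORY, STATE_DIM + ACTION_DIM))
z = infer_latent(buf)
print(z.shape)  # (8,)
```

Because the latent is recomputed from the most recent history at every step, a fault that changes the dynamics (e.g. loss of effectiveness on one rotor) shifts the latent online, without any retraining of the policy.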
@article{kim2025_2505.08223,
  title={Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation},
  author={Dohyun Kim and Jayden Dongwoo Lee and Hyochoong Bang and Jungho Bae},
  journal={arXiv preprint arXiv:2505.08223},
  year={2025}
}