Motion Control in Multi-Rotor Aerial Robots Using Deep Reinforcement Learning

Abstract

This paper investigates the application of Deep Reinforcement Learning (DRL) to motion control challenges in drones for additive manufacturing (AM). Drone-based additive manufacturing promises flexible and autonomous material deposition in large-scale or hazardous environments. However, achieving robust real-time control of a multi-rotor aerial robot under varying payloads and potential disturbances remains challenging. Traditional controllers such as PID often require frequent parameter re-tuning, which limits their applicability in dynamic scenarios. We propose a DRL framework that learns adaptable control policies for multi-rotor drones performing waypoint navigation in AM tasks. We compare Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3) within a curriculum learning scheme designed to handle increasing task complexity. Our experiments show that TD3 consistently balances training stability, accuracy, and task success, particularly when mass variability is introduced. These findings provide a scalable path toward robust, autonomous drone control in additive manufacturing.
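
The abstract does not give implementation details; as a rough illustration of the kind of setup it describes, the sketch below trains TD3 (here via stable-baselines3) on a hypothetical point-mass waypoint environment with a staged mass-variability curriculum. The environment, reward shaping, force scaling, and curriculum stages are illustrative assumptions, not the authors' actual drone model or training configuration.

# Minimal sketch (not the authors' code): TD3 with a mass-variability curriculum
# on a simplified point-mass "drone" waypoint task. Names and numbers are illustrative.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3

class PointMassWaypointEnv(gym.Env):
    """Hypothetical 3-D point-mass stand-in for a multi-rotor reaching a waypoint."""
    def __init__(self, mass_range=(1.0, 1.0), dt=0.05, max_steps=200):
        super().__init__()
        self.mass_range, self.dt, self.max_steps = mass_range, dt, max_steps
        # Observation: position error (3) + velocity (3); action: normalized force command (3).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.mass = self.np_random.uniform(*self.mass_range)   # payload variability
        self.pos = self.np_random.uniform(-1.0, 1.0, size=3)
        self.vel = np.zeros(3)
        self.goal = np.zeros(3)                                 # waypoint at the origin
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        force = 5.0 * np.asarray(action, dtype=np.float64)      # scaled force command
        self.vel += (force / self.mass) * self.dt               # F = m a
        self.pos += self.vel * self.dt
        self.steps += 1
        err = np.linalg.norm(self.goal - self.pos)
        reward = -err - 0.01 * np.linalg.norm(action)           # reach the goal, penalize effort
        terminated = bool(err < 0.05)
        truncated = self.steps >= self.max_steps
        return self._obs(), float(reward), terminated, truncated, {}

    def _obs(self):
        return np.concatenate([self.goal - self.pos, self.vel]).astype(np.float32)

# Curriculum: train on a fixed mass first, then widen the payload range in stages.
model = None
for mass_range in [(1.0, 1.0), (0.8, 1.2), (0.6, 1.5)]:
    env = PointMassWaypointEnv(mass_range=mass_range)
    if model is None:
        model = TD3("MlpPolicy", env, learning_rate=1e-3, verbose=0)
    else:
        model.set_env(env)                                      # continue training on the harder stage
    model.learn(total_timesteps=50_000)

The same loop could be run with DDPG in place of TD3 for a side-by-side comparison; TD3's clipped double-Q critics and delayed policy updates are what typically stabilize training as the mass range widens.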

@article{shetty2025_2502.05996,
  title={Motion Control in Multi-Rotor Aerial Robots Using Deep Reinforcement Learning},
  author={Gaurav Shetty and Mahya Ramezani and Hamed Habibi and Holger Voos and Jose Luis Sanchez-Lopez},
  journal={arXiv preprint arXiv:2502.05996},
  year={2025}
}