Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation

Abstract

Developing robotic agents that perform well across diverse environments while exhibiting a variety of behaviors is a key challenge in AI and robotics. Traditional reinforcement learning (RL) methods often produce agents that specialize in narrow tasks, limiting both their adaptability and their diversity. To overcome this, we propose a preliminary, evolution-inspired framework that includes a reproduction module, analogous to reproduction in natural species, which balances diversity and specialization. By integrating RL, imitation learning (IL), and a coevolutionary agent-terrain curriculum, our system evolves agents continuously through complex tasks. This approach promotes adaptability, inheritance of useful traits, and continual learning. Agents not only refine inherited skills but also surpass their predecessors. Our initial experiments show that this method improves exploration efficiency and supports open-ended learning, offering a scalable solution in which sparse rewards coupled with diverse terrain environments induce a multi-task setting.
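The evolve-and-distill loop described above can be illustrated with a toy sketch: a population of agents is improved on a terrain curriculum, the fittest half are selected as parents, and offspring inherit parental behavior before refining it further. This is not the authors' implementation; the parameter-vector agents, the hill-climbing stand-in for RL, and parameter averaging as a stand-in for imitation learning are all assumptions made purely for illustration.

```python
import random

random.seed(0)


def fitness(params, difficulty):
    # Toy stand-in for an RL return: agents score higher (closer to 0)
    # when their parameters track the current terrain difficulty.
    return -sum((p - difficulty) ** 2 for p in params)


def rl_step(params, difficulty, sigma=0.3, trials=16):
    # Hill-climbing proxy for an RL update: keep the best random perturbation.
    best = params
    for _ in range(trials):
        cand = [p + random.gauss(0, sigma) for p in params]
        if fitness(cand, difficulty) > fitness(best, difficulty):
            best = cand
    return best


def distill(parents):
    # "Parental guidance": offspring imitate parents, crudely modeled here
    # as averaging parent parameters instead of IL on parent rollouts.
    dim = len(parents[0])
    return [sum(p[i] for p in parents) / len(parents) for i in range(dim)]


def evolve(pop_size=8, dim=4, generations=10):
    population = [[random.gauss(0, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    for gen in range(generations):
        difficulty = 0.2 * gen  # terrain curriculum grows with the agents
        population = [rl_step(p, difficulty) for p in population]
        population.sort(key=lambda p: fitness(p, difficulty), reverse=True)
        parents = population[: pop_size // 2]
        # Replace the weakest half with distilled offspring that then
        # continue refining via their own RL steps.
        offspring = [rl_step(distill(parents), difficulty)
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    final_difficulty = 0.2 * (generations - 1)
    return max(fitness(p, final_difficulty) for p in population)


print(evolve())  # best final fitness; negative, approaching 0 as agents adapt
```

The key design point the sketch mirrors is that offspring are not trained from scratch: distillation transfers parental skill first, and RL then lets offspring surpass their parents as the terrain curriculum advances.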

@article{zhang2025_2503.18531,
  title={Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation},
  author={Octi Zhang and Quanquan Peng and Rosario Scalise and Bryon Boots},
  journal={arXiv preprint arXiv:2503.18531},
  year={2025}
}