Eau De Q-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning

3 March 2025
Théo Vincent
Tim Lukas Faust
Yogesh Tripathi
Jan Peters
Carlo D'Eramo
Abstract

Recent works have successfully demonstrated that sparse deep reinforcement learning agents can be competitive against their dense counterparts. This opens up opportunities for reinforcement learning applications in fields where inference time and memory requirements are cost-sensitive or limited by hardware. Until now, dense-to-sparse methods have relied on hand-designed sparsity schedules that are not synchronized with the agent's learning pace. Crucially, the final sparsity level is chosen as a hyperparameter, which requires careful tuning, as setting it too high might lead to poor performance. In this work, we address these shortcomings by crafting a dense-to-sparse algorithm that we name Eau De Q-Network (EauDeQN). To increase sparsity at the agent's learning pace, we consider multiple online networks with different sparsity levels, where each online network is trained from a shared target network. At each target update, the online network with the smallest loss is chosen as the next target network, while the other networks are replaced by a pruned version of the chosen network. We evaluate the proposed approach on the Atari 2600 benchmark and the MuJoCo physics simulator, showing that EauDeQN reaches high sparsity levels while keeping performance high.
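The selection step described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of that mechanism, not the authors' implementation: the network class, the magnitude-pruning routine, and the fixed sparsity increments are all illustrative assumptions (the actual method adapts sparsity to the agent's learning pace rather than using a fixed step).

# Minimal sketch of the adaptive target-update step described in the abstract.
# All names (QNetwork, prune_by_magnitude, sparsity_increment) are illustrative
# assumptions, not the paper's code.
import copy
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small MLP Q-network, purely for illustration."""
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def prune_by_magnitude(model, sparsity):
    """Return a copy of `model` with the smallest-magnitude weights zeroed
    so that roughly `sparsity` fraction of parameters is zero."""
    pruned = copy.deepcopy(model)
    with torch.no_grad():
        all_w = torch.cat([p.abs().flatten() for p in pruned.parameters()])
        k = int(sparsity * all_w.numel())
        if k > 0:
            threshold = torch.kthvalue(all_w, k).values
            for p in pruned.parameters():
                p.mul_((p.abs() > threshold).float())
    return pruned

def target_update_step(online_nets, losses, sparsity_increment=0.05):
    """At each target update: the online network with the smallest loss
    becomes the next target network; the others are replaced by pruned
    copies of it at different sparsity levels."""
    best = min(range(len(online_nets)), key=lambda i: losses[i])
    target = copy.deepcopy(online_nets[best])
    new_online = [
        prune_by_magnitude(target, (i + 1) * sparsity_increment)
        for i in range(len(online_nets))
    ]
    return target, new_online

# Example usage (losses are hypothetical TD losses gathered since the last update):
online_nets = [QNetwork() for _ in range(3)]
losses = [0.31, 0.27, 0.45]
target_net, online_nets = target_update_step(online_nets, losses)

Because the next target is always the lowest-loss candidate, sparsity only increases when a sparser network keeps up with the training signal, which is the sense in which sparsification follows the agent's learning pace.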

@article{vincent2025_2503.01437,
  title={Eau De $Q$-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning},
  author={Théo Vincent and Tim Faust and Yogesh Tripathi and Jan Peters and Carlo D'Eramo},
  journal={arXiv preprint arXiv:2503.01437},
  year={2025}
}