
L²NAS: Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning

25 September 2021
Keith G. Mills, Fred X. Han, Mohammad Salameh, Seyed Saeed Changiz Rezaei, Linglong Kong, Wei Lu, Shuo Lian, Shangling Jui, Di Niu
arXiv:2109.12425 · PDF · HTML
Abstract

Neural architecture search (NAS) has achieved remarkable results in deep neural network design. Differentiable architecture search converts the search over discrete architectures into a hyperparameter optimization problem that can be solved by gradient descent. However, questions have been raised about the effectiveness and generalizability of gradient methods for solving non-convex architecture hyperparameter optimization problems. In this paper, we propose L²NAS, which learns to intelligently optimize and update architecture hyperparameters via an actor neural network based on the distribution of high-performing architectures in the search history. We introduce a quantile-driven training procedure which efficiently trains L²NAS in an actor-critic framework via continuous-action reinforcement learning. Experiments show that L²NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as on the DARTS and Once-for-All MobileNetV3 search spaces. We also show that search policies generated by L²NAS are generalizable and transferable across different training datasets with minimal fine-tuning.
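To make the search loop concrete, below is a minimal, illustrative sketch of a continuous-action actor-critic of the kind the abstract describes, using a DDPG-style deterministic policy gradient. Everything here is an assumption for illustration, not the paper's implementation: the names (Actor, Critic, quantile_state), the hyperparameter dimension D, and the placeholder reward are hypothetical, and the actual L²NAS state encoding, quantile-driven objective, and architecture evaluation differ.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical: D continuous architecture hyperparameters, e.g. the
# operation mixing weights used in differentiable-NAS relaxations.
D = 32

class Actor(nn.Module):
    """Maps a summary of the search history to a new hyperparameter
    vector in [0, 1]^D (a continuous action)."""
    def __init__(self, state_dim=D, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, D), nn.Sigmoid(),
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Scores a (state, action) pair, i.e. predicts the quality of
    proposing hyperparameters `action` given the current history."""
    def __init__(self, state_dim=D, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + D, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def quantile_state(history, q=0.9):
    """Summarize the history by the mean hyperparameter vector of
    architectures above the q-th accuracy quantile -- a stand-in for
    conditioning on the distribution of high performers."""
    accs = np.array([acc for _, acc in history])
    cutoff = np.quantile(accs, q)
    top = np.stack([h for h, acc in history if acc >= cutoff])
    return torch.tensor(top.mean(axis=0), dtype=torch.float32)

actor, critic = Actor(), Critic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Warm-up history of (hyperparameter vector, accuracy) pairs.
history = [(np.random.rand(D), float(np.random.rand())) for _ in range(50)]

for step in range(200):
    state = quantile_state(history)
    action = actor(state)
    # Exploration noise on the continuous action, DDPG-style.
    noisy = (action + 0.1 * torch.randn(D)).clamp(0.0, 1.0)
    arch = noisy.detach().numpy()
    # Placeholder reward; a real system would decode `arch` into a
    # discrete architecture and return its (proxy) accuracy.
    reward = float(1.0 - ((arch - 0.7) ** 2).mean())
    history.append((arch, reward))

    # Critic regresses toward the observed reward (one-step view).
    q_val = critic(state, noisy.detach())
    loss_c = ((q_val - reward) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Actor ascends the critic's score (deterministic policy gradient).
    loss_a = -critic(state, actor(state)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```

The quantile filter is what steers the policy toward high performers in this sketch: only architectures above the q-th accuracy quantile contribute to the state summary, so the actor is conditioned on the upper tail of the search history rather than its average.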
