FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural
Architecture Search
One of the most critical problems in neural architecture search is evaluation, which ranks candidate models. Weight-sharing approaches have recently attracted rapidly growing interest: although they are orders of magnitude faster than traditional methods, they are prone to misjudging candidate architectures. In this paper, we first prove that biased evaluation is inevitable in current one-shot weight-sharing approaches due to their inherent unfairness. To rectify this, we propose two levels of fairness constraints: expectation fairness and strict fairness. Among several comparison groups, strict fairness works best both theoretically and empirically. Combining a supernet trained under this constraint with a multi-objective evolutionary search algorithm, we obtain three state-of-the-art models on ImageNet; in particular, FairNAS-A attains 75.34% top-1 accuracy. Finally, we give an in-depth analysis of the proposed method. The models and their evaluation code are publicly available at http://github.com/fairnas/FairNAS.
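The strict fairness constraint can be sketched as follows: in each training step, the supernet samples as many single-path models as there are candidate operations per layer, using a fresh random permutation per layer, so that every candidate operation receives exactly one gradient update per step. This is a minimal illustrative sketch under those assumptions; the function name and interface are ours, not from the paper.

```python
import random

def strict_fairness_paths(num_layers, num_choices, rng=random):
    """Sample single-path models for one supernet training step such that
    every candidate op in every layer is activated exactly once.

    Returns a list of `num_choices` paths; each path is a list of length
    `num_layers` giving the chosen op index per layer. (Hypothetical helper
    illustrating the strict-fairness sampling idea.)
    """
    # One independent random permutation of the candidate ops per layer.
    perms = [rng.sample(range(num_choices), num_choices)
             for _ in range(num_layers)]
    # Column i across all layer permutations defines the i-th path, so each
    # op index appears exactly once per layer over the sampled paths.
    return [[perms[layer][i] for layer in range(num_layers)]
            for i in range(num_choices)]
```

Training would then run a forward/backward pass for each sampled path before applying the accumulated updates, which is what makes the per-step update counts identical across candidate operations.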