Fisher Task Distance and Its Applications in Neural Architecture Search
and Transfer Learning
We formulate an asymmetric (non-commutative) distance between tasks based on Fisher Information Matrices. We establish the consistency of this distance through theorems and experiments on various classification tasks. We then apply the proposed task distance to transfer learning on visual tasks in the Taskonomy dataset. Additionally, we show how the distance between a target task and a set of baseline tasks can be used to reduce the neural architecture search space for the target task. This reduction in search-space complexity for task-specific architectures is achieved by building on the optimized architectures for similar tasks instead of performing a full search without using this side information. Experimental results demonstrate the efficacy of the proposed approach and its improvements over other methods.
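The paper's exact definition is not reproduced on this page, but the core idea can be illustrated with a minimal sketch: estimate a diagonal Fisher Information Matrix for a model on each task, then compare the two FIMs with a Fréchet-style distance. The logistic model, the shared probe weights, and the symmetric distance form below are illustrative assumptions, not the paper's method (the paper's measure is asymmetric).

```python
import numpy as np

def diagonal_fisher(X, y, w):
    """Empirical diagonal Fisher information of a logistic-regression
    model with weights w on data (X, labels y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    g = (p - y)[:, None] * X              # per-example NLL gradients
    return np.mean(g ** 2, axis=0)        # diagonal Fisher estimate

def fisher_distance(F_a, F_b):
    """Frechet-style distance between diagonal FIMs:
    (1/sqrt(2)) * ||F_a^(1/2) - F_b^(1/2)||_F.
    Symmetric here for simplicity; the paper's distance is asymmetric."""
    return np.linalg.norm(np.sqrt(F_a) - np.sqrt(F_b)) / np.sqrt(2)

# Toy example: two binary "tasks" defined on the same feature space.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y_a = (X[:, 0] > 0).astype(float)         # task A labels
y_b = (X[:, 1] > 0).astype(float)         # task B labels
w = rng.normal(size=5)                    # shared probe weights

F_a = diagonal_fisher(X, y_a, w)
F_b = diagonal_fisher(X, y_b, w)
d_ab = fisher_distance(F_a, F_b)
print(d_ab)
```

In this sketch a smaller `d_ab` indicates that the two tasks induce similar curvature in the shared model, which is the intuition behind using such a distance to pick source tasks for transfer or to seed an architecture search.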