
Active Representation Learning for General Task Space with Applications in Robotics

Abstract

Representation learning based on multi-task pretraining has become a powerful approach in many domains. In particular, task-aware representation learning aims to learn an optimal representation for a specific target task by sampling data from a set of source tasks, while task-agnostic representation learning seeks to learn a universal representation for a class of tasks. In this paper, we propose a general and versatile algorithmic and theoretical framework for \textit{active representation learning}, where the learner optimally chooses which source tasks to sample from. This framework, together with a tractable meta algorithm, accommodates nearly arbitrary target and source task spaces (from discrete to continuous), covers both the task-aware and task-agnostic settings, and is compatible with deep representation learning practice. We provide several instantiations under this framework, from bilinear and feature-based nonlinear to general nonlinear cases. In the bilinear case, by leveraging the non-uniform spectrum of the task representation and the calibrated source-target relevance, we prove that the sample complexity to achieve $\varepsilon$-excess risk on the target scales with $(k^*)^2 \|v^*\|_2^2 \varepsilon^{-2}$, where $k^*$ is the effective dimension of the target and $\|v^*\|_2^2 \in (0,1]$ captures the connection between the source and target spaces. Compared to passive sampling, this can reduce the sample complexity to as little as a $1/d_W$ fraction, where $d_W$ is the dimension of the task space. Finally, we demonstrate different instantiations of our meta algorithm on synthetic datasets and robotics problems, from pendulum simulations to real-world drone flight datasets. On average, our algorithms outperform the baselines by $20\%$ to $70\%$.
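
As an informal restatement of these bounds (our paraphrase, not notation from the paper; $n_{\mathrm{act}}$ and $n_{\mathrm{pas}}$ denote the target sample complexities under active and passive source sampling, and constants and logarithmic factors are omitted):
\[
  n_{\mathrm{act}}(\varepsilon) \;\lesssim\; (k^*)^2\,\|v^*\|_2^2\,\varepsilon^{-2},
  \qquad
  n_{\mathrm{act}}(\varepsilon) \;\approx\; \frac{n_{\mathrm{pas}}(\varepsilon)}{d_W}
  \quad\text{in the most favorable case.}
\]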
