Beyond Task Diversity: Provable Representation Transfer for Sequential Multi-Task Linear Bandits

We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks whose parameters lie in an -dimensional subspace of , thereby sharing a low-rank representation. Current literature typically assumes that the tasks are diverse, i.e., their parameters uniformly span the -dimensional subspace. This assumption allows the low-rank representation to be learned before all tasks are revealed, which can be unrealistic in real-world applications. In this work, we present the first nontrivial result for sequential multi-task linear bandits without the task diversity assumption. We develop an algorithm that efficiently learns and transfers low-rank representations. When facing tasks, each played over rounds, our algorithm achieves a regret guarantee of under the ellipsoid action set assumption. This result can significantly improve upon the baseline of that does not leverage the low-rank structure when the number of tasks is sufficiently large and . We also demonstrate empirically on synthetic data that our algorithm outperforms baseline algorithms, which rely on the task diversity assumption.
View on arXiv