Black-box Prompt Tuning with Subspace Learning

Abstract

Black-box prompt tuning employs derivative-free optimization algorithms to learn prompts within low-dimensional subspaces rather than back-propagating through the network of Large Language Models (LLMs). Recent studies reveal that black-box prompt tuning lacks versatility across tasks and LLMs, which we attribute to a suboptimal choice of subspace. In this paper, we introduce Black-box prompt tuning with Subspace Learning (BSL) to enhance the versatility of black-box prompt tuning. Based on the assumption that nearly optimal prompts for similar tasks reside in a common subspace, we propose identifying such subspaces through meta-learning on a collection of similar source tasks. Consequently, for a target task that shares similarities with the source tasks, optimizing within the identified subspace is expected to yield a prompt that performs well on the target task. Experimental results confirm that our BSL framework consistently achieves competitive performance across various downstream tasks and LLMs.
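
To make the setup concrete, below is a minimal, illustrative sketch of black-box prompt tuning in a fixed low-dimensional subspace: a small coordinate vector z is mapped into the high-dimensional prompt-embedding space by a projection matrix A, and z is optimized with a simple (1+1) evolution strategy using only loss evaluations, with no gradients. Everything here is an assumption for illustration, not the paper's implementation: the dimensions, the stand-in loss_fn, and the random projection A (which, in BSL, would instead be meta-learned on similar source tasks).

```python
# Illustrative sketch of black-box prompt tuning in a low-dimensional
# subspace. All names and choices here are hypothetical, not the
# authors' actual code.
import numpy as np

rng = np.random.default_rng(0)

PROMPT_DIM = 20 * 768   # e.g., 20 soft-prompt tokens x 768-dim embeddings
SUBSPACE_DIM = 10       # low intrinsic dimension searched by the optimizer

# In BSL, A would be meta-learned on a collection of similar source tasks;
# a random projection stands in for it here.
A = rng.standard_normal((PROMPT_DIM, SUBSPACE_DIM)) / np.sqrt(SUBSPACE_DIM)

def loss_fn(prompt: np.ndarray) -> float:
    """Stand-in for a black-box LLM evaluation: only function values are
    available, no gradients. A real version would prepend the prompt
    embeddings to the input and return the task loss from API calls."""
    target = np.ones(PROMPT_DIM)
    return float(np.mean((prompt - target) ** 2))

# Simple (1+1) evolution strategy over the low-dimensional coordinates z.
z = np.zeros(SUBSPACE_DIM)
best = loss_fn(A @ z)
sigma = 0.5
for step in range(500):
    cand = z + sigma * rng.standard_normal(SUBSPACE_DIM)
    val = loss_fn(A @ cand)
    if val < best:  # accept only improvements; entirely derivative-free
        z, best = cand, val

print(f"final loss: {best:.4f}")
```

Because the search happens over SUBSPACE_DIM coordinates rather than PROMPT_DIM parameters, derivative-free optimizers remain tractable; the quality of the result then hinges on whether the subspace spanned by A actually contains a good prompt for the target task, which is the gap BSL's meta-learned subspace is meant to close.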
