Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning
In this paper we consider multi-objective reinforcement learning where the objectives are balanced using preferences. In practice, the preferences are often given in an adversarial manner, e.g., customers can be picky in many applications. We formalize this problem as an episodic learning problem on a Markov decision process, where transitions are unknown and a reward function is the inner product of a preference vector with pre-specified multi-objective reward functions. We consider two settings. In the online setting, the agent receives an (adversarial) preference every episode and proposes policies to interact with the environment. We provide a model-based algorithm that achieves a regret bound $\widetilde{\mathcal{O}}\bigl(\sqrt{\min\{d,S\} \cdot H^2 S A K}\bigr)$, where $d$ is the number of objectives, $S$ is the number of states, $A$ is the number of actions, $H$ is the length of the horizon, and $K$ is the number of episodes. Furthermore, we consider preference-free exploration, i.e., the agent first interacts with the environment without specifying any preference and then is able to accommodate arbitrary preference vectors up to $\epsilon$ error. Our proposed algorithm is provably efficient with a nearly optimal sample complexity $\widetilde{\mathcal{O}}\bigl(\min\{d,S\} \cdot H^3 S A / \epsilon^2\bigr)$.
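For concreteness, here is a minimal sketch of the scalarization step the abstract describes: the scalar reward the agent optimizes is the inner product of a preference vector with the vector of pre-specified multi-objective rewards. All names, shapes, and the tabular representation below are illustrative assumptions, not the paper's code or algorithm.

```python
import numpy as np

# Illustrative problem sizes (assumptions, not from the paper):
# d objectives, S states, A actions, horizon H.
d, S, A, H = 3, 10, 4, 5

# Pre-specified multi-objective reward functions, stored as a tensor:
# R[i, s, a] is the reward of objective i at state s under action a.
# These are known to the agent; only the transitions are unknown.
rng = np.random.default_rng(0)
R = rng.uniform(size=(d, S, A))

def scalarized_reward(w: np.ndarray, s: int, a: int) -> float:
    """Scalar reward induced by preference w: the inner product <w, R[:, s, a]>."""
    return float(w @ R[:, s, a])

# Each episode an (adversarial) preference vector w arrives; the agent then
# plans and acts to maximize the w-scalarized return for that episode.
w = rng.uniform(size=d)
w /= w.sum()  # normalize so w is a convex combination of the d objectives
print(scalarized_reward(w, s=0, a=1))
```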