User Tampering in Reinforcement Learning Recommender Systems
This paper provides novel formal methods and empirical demonstrations of a particular safety concern in reinforcement learning (RL)-based recommendation algorithms. We call this safety concern 'user tampering': a phenomenon whereby an RL-based recommender system might manipulate a media user's opinions via its recommendations, as part of a policy to increase long-term user engagement. We then apply techniques from causal modelling to analyse the leading approaches in the literature for implementing scalable RL-based recommenders, and we observe that these approaches all permit user tampering. Additionally, we review the existing mitigation strategies for reward tampering problems and show that they do not transfer well to the user tampering phenomenon that arises in the recommendation context. Finally, we present a simulation study of an RL-based media recommendation problem in which the system is restricted to recommending political content. We show that a Q-learning algorithm consistently learns to exploit its opportunities to polarise simulated users with its early recommendations, so that its later recommendations, catering to that polarisation, succeed more reliably. The simulation results underscore the urgency of designing safer RL-based recommenders; the causal analysis suggests that building such recommenders will require a fundamental shift away from the design approaches seen in the recent literature.
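To make the dynamic studied in the simulation concrete, the following is a minimal sketch, not the paper's actual experimental setup: the opinion dynamics, parameters, and helper names (`step`, `discretise`) are all illustrative assumptions. A tabular Q-learning agent recommends politically leaning content to a simulated user whose opinion drifts toward the lean of content they consume; because engagement is likelier when lean matches opinion, the agent can profit by polarising a centrist user early and catering to that polarisation later.

```python
import random
from collections import defaultdict

# Hypothetical toy model (not the paper's setup): a user's political
# opinion is a scalar in [-1, 1]; each recommendation has a lean in
# {-1, 0, +1}. Engagement is likelier when the lean matches the opinion,
# and consumed content nudges the opinion toward its lean, opening a
# "user tampering" channel the agent can exploit.

ACTIONS = (-1, 0, 1)          # content lean: left, neutral, right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
EPISODES, HORIZON = 20_000, 20

def discretise(opinion, bins=9):
    """Map a continuous opinion in [-1, 1] to a discrete state index."""
    return min(bins - 1, int((opinion + 1) / 2 * bins))

def step(opinion, lean, rng):
    """Simulated user: engagement probability rises with opinion-lean
    agreement, and consumed content drags the opinion toward its lean."""
    p_engage = 0.5 + 0.4 * opinion * lean
    engaged = rng.random() < p_engage
    if engaged:
        opinion = max(-1.0, min(1.0, opinion + 0.1 * lean))  # tampering channel
    return opinion, float(engaged)  # reward = 1 if the user engaged

def train(seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(EPISODES):
        opinion = rng.uniform(-0.2, 0.2)  # users start near the centre
        for _ in range(HORIZON):
            s = discretise(opinion)
            # Epsilon-greedy action selection over the content leans.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            opinion, r = step(opinion, a, rng)
            s2 = discretise(opinion)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            # Standard one-step Q-learning update.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q

if __name__ == "__main__":
    q = train()
    centre = discretise(0.0)
    # If the learned policy prefers a non-neutral lean for centrist users,
    # the agent has found polarise-then-cater more valuable than neutrality.
    print({a: round(q[(centre, a)], 3) for a in ACTIONS})
```

Under these assumed dynamics, the neutral action never moves the opinion and yields only a 0.5 engagement probability, while a partisan action sacrifices a little immediate reward to shift the user toward a state where matching recommendations engage with probability up to 0.9, which is the incentive structure the paper's simulation study probes at scale.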