Towards Optimal Differentially Private Regret Bounds in Linear MDPs

We study regret minimization under privacy constraints in episodic inhomogeneous linear Markov Decision Processes (MDPs), motivated by the growing use of reinforcement learning (RL) in personalized decision-making systems that rely on sensitive user data. In this setting, both the transition probabilities and the reward functions are assumed to be linear in a known feature mapping, and we aim to ensure privacy through joint differential privacy (JDP), a relaxation of differential privacy suited to online learning. Prior work established suboptimal regret bounds by privatizing the LSVI-UCB algorithm, which achieves $\widetilde{O}(\sqrt{d^3 H^4 K})$ regret in the non-private setting. Building on recent advances that improve this to the near minimax optimal rate $\widetilde{O}(d\sqrt{H^3 K})$ via LSVI-UCB++ with Bernstein-style bonuses, we design a new differentially private algorithm by privatizing LSVI-UCB++ and adapting variance-aware analysis techniques from offline RL. Our algorithm achieves a regret bound that improves over previous private methods. Empirical results show that our algorithm retains near-optimal utility compared to non-private baselines, indicating that privacy can be achieved with minimal performance degradation in this setting.
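To make the privatization step concrete, the sketch below illustrates the standard recipe used when privatizing LSVI-style algorithms: perturb the least-squares sufficient statistics (the Gram matrix and the feature-target vector) with Gaussian noise before solving for the value-function weights. This is a generic, hedged illustration of the technique, not the paper's exact algorithm; the function name `private_lsvi_weights` and the free noise scale `sigma` are assumptions, and in a full JDP algorithm `sigma` would be calibrated to the privacy budget $(\epsilon, \delta)$, typically via a tree-based aggregation mechanism over episodes.

```python
import numpy as np

def private_lsvi_weights(features, targets, lam=1.0, sigma=1.0, rng=None):
    """Ridge-regression weights from privatized sufficient statistics.

    Generic sketch (not the paper's algorithm): add symmetric Gaussian
    noise to the regularized Gram matrix and i.i.d. Gaussian noise to
    the feature-target vector, then solve the least-squares system.
    `sigma` is a free parameter here; a real JDP algorithm calibrates
    it to the privacy budget (epsilon, delta).
    """
    rng = np.random.default_rng(rng)
    d = features.shape[1]
    # Non-private sufficient statistics of regularized least squares.
    gram = features.T @ features + lam * np.eye(d)
    vec = features.T @ targets
    # Symmetrizing the noise keeps the perturbed Gram matrix symmetric.
    noise = rng.normal(scale=sigma, size=(d, d))
    gram_priv = gram + (noise + noise.T) / 2.0
    vec_priv = vec + rng.normal(scale=sigma, size=d)
    # Solve the perturbed normal equations for the weight vector.
    return np.linalg.solve(gram_priv, vec_priv)
```

With enough data relative to the noise scale, the privatized weights stay close to the non-private solution, which is the intuition behind the "minimal performance degradation" observed empirically.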
@article{sahu2025_2504.09339,
  title={Towards Optimal Differentially Private Regret Bounds in Linear MDPs},
  author={Sharan Sahu},
  journal={arXiv preprint arXiv:2504.09339},
  year={2025}
}