MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking

22 January 2025
Sebastian Farquhar, Vikrant Varma, David Lindner, David Elson, Caleb Biddulph, Ian Goodfellow, Rohin Shah
Abstract

Future advanced AI systems may learn sophisticated strategies through reinforcement learning (RL) that humans cannot understand well enough to safely evaluate. We propose a training method that prevents agents from learning undesired multi-step plans that receive high reward (multi-step "reward hacks"), even when humans are unable to detect that the behaviour is undesired. The method, Myopic Optimization with Non-myopic Approval (MONA), works by combining short-sighted optimization with far-sighted reward. We demonstrate that MONA can prevent multi-step reward hacking that ordinary RL causes, even without being able to detect the reward hacking and without any extra information that ordinary RL does not have access to. We study MONA empirically in three settings which model different misalignment failure modes: two-step environments with LLMs representing delegated oversight and encoded reasoning, and longer-horizon gridworld environments representing sensor tampering.

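To make the contrast in the abstract concrete, the sketch below shows how the two return computations differ in a simple episodic policy-gradient setup. It is a minimal illustration only: the function names, the `approvals` signal, and the additive combination of environment reward and approval are assumptions for exposition, not the authors' implementation.

```python
import torch

def ordinary_rl_returns(rewards, gamma=0.99):
    # Ordinary RL: each action is credited with the full discounted
    # future return, so strategies whose payoff arrives several steps
    # later (including multi-step "reward hacks") can be reinforced.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

def mona_returns(env_rewards, approvals):
    # MONA (sketch): optimization is myopic -- each action is credited
    # only with its immediate reward plus a non-myopic approval signal
    # from an overseer who judges whether the action looks good for the
    # long run. The agent never optimizes through future steps itself.
    return [r + a for r, a in zip(env_rewards, approvals)]

def policy_gradient_loss(log_probs, returns):
    # Standard score-function estimator; only the choice of returns
    # distinguishes the two training schemes above.
    returns = torch.tensor(returns, dtype=torch.float32)
    return -(torch.stack(log_probs) * returns).sum()
```

Under this reading, the far-sightedness lives entirely in the overseer's per-step approval, while the learning update itself assigns no credit across steps.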
@article{farquhar2025_2501.13011,
  title={MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking},
  author={Sebastian Farquhar and Vikrant Varma and David Lindner and David Elson and Caleb Biddulph and Ian Goodfellow and Rohin Shah},
  journal={arXiv preprint arXiv:2501.13011},
  year={2025}
}