ResearchTrend.AI
HAVA: Hybrid Approach to Value-Alignment through Reward Weighing for Reinforcement Learning

21 May 2025
Kryspin Varys
Federico Cerutti
Adam Sobey
Timothy J. Norman
Abstract

Our society is governed by a set of norms which together bring about the values we cherish, such as safety, fairness, or trustworthiness. The goal of value alignment is to create agents that not only perform their tasks but, through their behaviour, also promote these values. Many norms are written down as laws or rules (legal/safety norms), but even more remain unwritten (social norms). The techniques used to represent these norms also differ: safety/legal norms are often represented explicitly, for example in a logical language, while social norms are typically learned and remain hidden in the parameter space of a neural network. The literature lacks approaches that combine these different norm representations in a single algorithm. We propose a novel method that integrates these norms into the reinforcement learning process. Our method monitors the agent's compliance with the given norms and summarizes it in a quantity we call the agent's reputation. This quantity is used to weigh the received rewards, motivating the agent to become value-aligned. We carry out a series of experiments, including a continuous state space traffic problem, to demonstrate the importance of written and unwritten norms and to show how our method can find value-aligned policies. Furthermore, we carry out ablations to demonstrate why it is better to combine these two groups of norms than to use either separately.
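The abstract does not give the exact update rule or weighting function, but the core mechanism (track norm compliance in a scalar reputation, then scale the environment reward by it) can be sketched as follows. All names, the exponential-moving-average update, and the multiplicative weighting are illustrative assumptions, not the paper's actual formulation.

```python
def update_reputation(reputation: float, complied: bool, decay: float = 0.9) -> float:
    """Assumed form: exponential moving average of binary norm compliance.

    `complied` would come from checking the agent's action against both the
    explicit (legal/safety) norms and a learned model of social norms.
    """
    return decay * reputation + (1.0 - decay) * (1.0 if complied else 0.0)


def weighted_reward(reward: float, reputation: float) -> float:
    """Assumed form: scale the task reward by the agent's current reputation."""
    return reputation * reward


# Toy rollout: the agent violates a norm mid-episode, so the rewards it
# receives afterwards shrink until compliant behaviour rebuilds reputation.
reputation = 1.0
shaped_rewards = []
for complied in [True, True, False, False, True]:
    reputation = update_reputation(reputation, complied)
    shaped_rewards.append(weighted_reward(1.0, reputation))
```

Under these assumptions a norm violation depresses all subsequent rewards, so a return-maximizing agent is pushed toward compliant behaviour even when the task reward alone would not penalize the violation.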

@article{varys2025_2505.15011,
  title={HAVA: Hybrid Approach to Value-Alignment through Reward Weighing for Reinforcement Learning},
  author={Kryspin Varys and Federico Cerutti and Adam Sobey and Timothy J. Norman},
  journal={arXiv preprint arXiv:2505.15011},
  year={2025}
}