Adaptive Helpfulness-Harmlessness Alignment with Preference Vectors

27 April 2025
Ren-Wei Liang
Chin-Ting Hsu
Chan-Hung Yu
Saransh Agrawal
Shih-Cheng Huang
Shang-Tse Chen
Kuan-Hao Huang
Shao-Hua Sun
Abstract

Ensuring that large language models (LLMs) are both helpful and harmless is a critical challenge, as overly strict constraints can lead to excessive refusals, while permissive models risk generating harmful content. Existing approaches, such as reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO), attempt to balance these trade-offs but suffer from performance conflicts, limited controllability, and poor extendability. To address these issues, we propose Preference Vector, a novel framework inspired by task arithmetic. Instead of optimizing multiple preferences within a single objective, we train separate models on individual preferences, extract behavior shifts as preference vectors, and dynamically merge them at test time. This modular approach enables fine-grained, user-controllable preference adjustments and facilitates seamless integration of new preferences without retraining. Experiments show that our proposed Preference Vector framework improves helpfulness without excessive conservatism, allows smooth control over preference trade-offs, and supports scalable multi-preference alignment.
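To make the task-arithmetic intuition concrete, below is a minimal sketch of how preference vectors could be extracted and merged over PyTorch state dicts. This is an illustration under assumptions, not the authors' implementation; the checkpoint names (base_model, helpful_model, harmless_model), function names, and coefficient values are hypothetical.

import torch

def preference_vector(finetuned_sd, base_sd):
    # Behavior shift: fine-tuned weights minus the shared base weights.
    return {k: finetuned_sd[k] - base_sd[k] for k in base_sd}

def merge_preferences(base_sd, vectors, coeffs):
    # Add user-weighted preference vectors back onto the base model at test time.
    merged = {k: v.clone() for k, v in base_sd.items()}
    for vec, c in zip(vectors, coeffs):
        for k in merged:
            merged[k] = merged[k] + c * vec[k]
    return merged

# Usage (hypothetical checkpoints fine-tuned on individual preferences):
# base_sd     = base_model.state_dict()
# v_helpful   = preference_vector(helpful_model.state_dict(), base_sd)
# v_harmless  = preference_vector(harmless_model.state_dict(), base_sd)
# merged_sd   = merge_preferences(base_sd, [v_helpful, v_harmless], coeffs=[0.7, 0.5])
# base_model.load_state_dict(merged_sd)

Because the merge happens in parameter space at test time, the coefficients act as user-controllable knobs on the helpfulness-harmlessness trade-off, and a new preference only requires one additional fine-tuned checkpoint rather than retraining the combined model.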

@article{liang2025_2504.20106,
  title={Adaptive Helpfulness-Harmlessness Alignment with Preference Vectors},
  author={Ren-Wei Liang and Chin-Ting Hsu and Chan-Hung Yu and Saransh Agrawal and Shih-Cheng Huang and Shang-Tse Chen and Kuan-Hao Huang and Shao-Hua Sun},
  journal={arXiv preprint arXiv:2504.20106},
  year={2025}
}