
PARM: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model

Abstract

Multi-objective test-time alignment aims to adapt large language models (LLMs) to diverse multi-dimensional user preferences during inference while keeping the LLMs frozen. Recently, GenARM (Xu et al., 2025) first trains an Autoregressive Reward Model (ARM) independently for each preference dimension, without awareness of the others, and then combines their outputs according to user-specific preference vectors during inference to achieve multi-objective test-time alignment. This design has two key limitations: the need for multiple ARMs increases inference cost, and the separate training of the ARMs causes misalignment between the guided generation and the user preferences. To address these issues, we propose the Preference-aware ARM (PARM), a single unified ARM trained jointly across all preference dimensions. PARM is trained with our proposed Preference-Aware Bilinear Low-Rank Adaptation (PBLoRA), which uses a bilinear form to condition the ARM on preference vectors, enabling precise control over preference trade-offs during inference. Experiments demonstrate that PARM reduces inference cost and achieves better alignment with preference vectors than existing methods. Additionally, PARM enables weak-to-strong guidance, allowing a smaller PARM to guide a larger frozen LLM without expensive training, making multi-objective alignment accessible with limited computing resources. The code is available at this https URL.
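
To make the bilinear conditioning concrete, below is a minimal sketch of what a PBLoRA-style layer could look like. It assumes the low-rank update places a preference-conditioned core matrix C(p) between the usual LoRA factors, i.e. W = W0 + A C(p) B; the module name `PBLoRALinear`, the conditioner `cond`, and the exact parameterization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PBLoRALinear(nn.Module):
    """Sketch of a preference-aware bilinear low-rank adapted linear layer.

    The frozen base weight W0 is adapted as
        W = W0 + A @ C(p) @ B,
    where C(p) is an r x r core matrix generated from the user
    preference vector p, so the update is bilinear in (A, B)
    and conditioned on the preference vector.
    """

    def __init__(self, in_features, out_features, rank=8, num_prefs=2):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # keep the base LLM weight frozen
        # Standard low-rank factors (zero-initialized A so training starts at W0).
        self.A = nn.Parameter(torch.zeros(out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        # Hypothetical conditioner: maps the preference vector to the r x r core.
        self.cond = nn.Linear(num_prefs, rank * rank)
        self.rank = rank

    def forward(self, x, pref):
        # pref: (num_prefs,) preference vector, e.g. [0.7, 0.3]
        C = self.cond(pref).view(self.rank, self.rank)
        delta = self.A @ C @ self.B  # preference-conditioned low-rank update
        return self.base(x) + x @ delta.T

# Usage: one layer serves all preference trade-offs at inference time.
layer = PBLoRALinear(512, 512)
y = layer(torch.randn(1, 512), torch.tensor([0.7, 0.3]))
```

Because the preference vector enters only through the small r x r core, a single set of adapter weights can serve every trade-off at inference time, which is what lets one PARM replace the multiple independently trained ARMs used by GenARM.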

@article{lin2025_2505.06274,
  title={PARM: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model},
  author={Baijiong Lin and Weisen Jiang and Yuancheng Xu and Hao Chen and Ying-Cong Chen},
  journal={arXiv preprint arXiv:2505.06274},
  year={2025}
}