Adding Alignment Control to Language Models

6 March 2025
Wenhong Zhu
Weinan Zhang
Rui Wang
Abstract

Post-training alignment has increasingly become a crucial factor in enhancing the usability of language models (LMs). However, the desired strength of alignment varies with individual preferences. This paper proposes a method to incorporate alignment control into a single model, referred to as CLM. The approach adds an identity layer preceding the model's initial layers and performs preference learning only on this layer, mapping unaligned input token embeddings into the aligned space. Experimental results demonstrate that this efficient fine-tuning method performs comparably to full fine-tuning. During inference, the input embeddings are processed through both the aligned and unaligned layers, and the two outputs are merged via an interpolation coefficient. Controlling this coefficient produces a clear alignment interpolation and extrapolation phenomenon.
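
As a rough illustration of the mechanism described in the abstract, here is a minimal PyTorch sketch. The class name ControlledEmbedding, the coefficient name lam, and the initialization details are assumptions for illustration, not the authors' released implementation: a square linear layer initialized to the identity sits after the token embedding, is the only trained component during preference learning, and is blended with the untouched embedding at inference.

import torch
import torch.nn as nn

class ControlledEmbedding(nn.Module):
    """Hypothetical sketch of the CLM idea: a square linear layer,
    initialized to the identity, inserted after the token embedding.
    Only this layer is updated during preference learning, mapping
    'unaligned' embeddings into the 'aligned' space."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.align = nn.Linear(hidden_size, hidden_size, bias=False)
        nn.init.eye_(self.align.weight)  # start as an identity map

    def forward(self, embeddings: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        # lam = 1.0 -> fully aligned path; lam = 0.0 -> original (unaligned)
        # embeddings. Values outside [0, 1] extrapolate alignment strength.
        aligned = self.align(embeddings)
        return lam * aligned + (1.0 - lam) * embeddings

Under this reading, the base model's weights stay frozen while the inserted layer is trained with a preference objective; at inference, lam between 0 and 1 interpolates between the unaligned and aligned behaviors, and lam above 1 extrapolates beyond the trained alignment strength.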

@article{zhu2025_2503.04346,
  title={Adding Alignment Control to Language Models},
  author={Wenhong Zhu and Weinan Zhang and Rui Wang},
  journal={arXiv preprint arXiv:2503.04346},
  year={2025}
}