RepCali: High Efficient Fine-tuning Via Representation Calibration in Latent Space for Pre-trained Language Models

13 May 2025
Fujun Zhang
Xiangdong Su
Abstract

Fine-tuning pre-trained language models (PLMs) has become the dominant paradigm for applying PLMs to downstream tasks. However, with limited fine-tuning, PLMs still struggle with the discrepancy between the representation produced by the PLM's encoder and the optimal input to the PLM's decoder. This paper tackles this challenge by learning to calibrate the representation of PLMs in the latent space. In the proposed representation calibration method (RepCali), we integrate a calibration block into the latent space after the encoder and use the calibrated output as the decoder input. The merits of RepCali include its universality across all PLMs with encoder-decoder architectures, its plug-and-play nature, and its ease of implementation. Extensive experiments on 25 PLM-based models across 8 tasks (covering both English and Chinese datasets) demonstrate that RepCali offers desirable enhancements to PLMs (including LLMs) and significantly improves performance on downstream tasks. Comparison experiments on 4 benchmark tasks indicate that RepCali is superior to representative fine-tuning baselines.
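
The abstract describes inserting a calibration block between the PLM's encoder and decoder so the decoder receives a calibrated representation. The sketch below shows one way such a plug-and-play block might be wired into a HuggingFace-style encoder-decoder model. The block's internal design (a small residual MLP with layer normalization) and the CalibrationBlock / CalibratedSeq2Seq names are illustrative assumptions, since the abstract does not specify the architecture.

import torch
import torch.nn as nn

class CalibrationBlock(nn.Module):
    """Hypothetical latent-space calibration block.

    The abstract does not detail the block's internals; here we assume a
    small residual MLP that learns a correction to each encoder hidden state.
    """
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.calib = nn.Sequential(
            nn.Linear(hidden_size, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, hidden_size),
        )
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, encoder_hidden: torch.Tensor) -> torch.Tensor:
        # Residual correction keeps the block plug-and-play: if the learned
        # correction is near zero, the PLM behaves as it did before.
        return self.norm(encoder_hidden + self.calib(encoder_hidden))


class CalibratedSeq2Seq(nn.Module):
    """Wraps an encoder-decoder PLM and calibrates the encoder output
    before it is fed to the decoder (illustrative wiring only)."""
    def __init__(self, plm, hidden_size: int):
        super().__init__()
        self.plm = plm
        self.calibration = CalibrationBlock(hidden_size)

    def forward(self, input_ids, attention_mask, labels=None):
        # Run the encoder, calibrate its hidden states, then let the full
        # model's decoder consume the adjusted representation.
        enc = self.plm.get_encoder()(
            input_ids=input_ids, attention_mask=attention_mask
        )
        enc.last_hidden_state = self.calibration(enc.last_hidden_state)
        return self.plm(
            encoder_outputs=enc,
            attention_mask=attention_mask,
            labels=labels,
        )

# Example wiring (assumes a HuggingFace-style encoder-decoder PLM such as BART):
#   plm = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
#   model = CalibratedSeq2Seq(plm, hidden_size=plm.config.d_model)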

View on arXiv
@article{zhang2025_2505.08463,
  title={RepCali: High Efficient Fine-tuning Via Representation Calibration in Latent Space for Pre-trained Language Models},
  author={Fujun Zhang and Xiangdong Su},
  journal={arXiv preprint arXiv:2505.08463},
  year={2025}
}