Plug-in and Fine-tuning: Bridging the Gap between Small Language Models and Large Language Models

9 June 2025
Kyeonghyun Kim
Jinhee Jang
Juhwan Choi
Yoonji Lee
Kyohoon Jin
Youngbin Kim
arXiv (abs) · PDF · HTML
Main: 7 pages · 3 figures · Bibliography: 5 pages · 26 tables · Appendix: 7 pages
Abstract

Large language models (LLMs) are renowned for their extensive linguistic knowledge and strong generalization capabilities, but their high computational demands make them unsuitable for resource-constrained environments. In contrast, small language models (SLMs) are computationally efficient but often lack the broad generalization capacity of LLMs. To bridge this gap, we propose PiFi, a novel framework that combines the strengths of both LLMs and SLMs to achieve high performance while maintaining efficiency. PiFi integrates a single frozen layer from an LLM into an SLM and fine-tunes the combined model for specific tasks, boosting performance without a significant increase in computational cost. We show that PiFi delivers consistent performance improvements across a range of natural language processing tasks, including both natural language understanding and generation. Moreover, our findings demonstrate PiFi's ability to effectively leverage LLM knowledge, enhancing generalization to unseen domains and facilitating the transfer of linguistic abilities.
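The abstract only states that a single frozen LLM layer is integrated into an SLM and the combined model is fine-tuned; the PyTorch sketch below shows one plausible reading of that idea, not the paper's actual implementation. The PiFiStyleBlock class, the up/down linear projections bridging the two hidden sizes, and the use of nn.TransformerEncoderLayer as a stand-in for a real pretrained LLM layer are all illustrative assumptions.

import torch
import torch.nn as nn

class PiFiStyleBlock(nn.Module):
    """Hypothetical PiFi-style module: one frozen LLM layer spliced into
    an SLM, with linear projections bridging the two hidden sizes."""

    def __init__(self, llm_layer: nn.Module, slm_dim: int, llm_dim: int):
        super().__init__()
        self.up = nn.Linear(slm_dim, llm_dim)    # SLM width -> LLM width
        self.llm_layer = llm_layer               # borrowed LLM layer, kept frozen
        self.down = nn.Linear(llm_dim, slm_dim)  # LLM width -> SLM width
        for p in self.llm_layer.parameters():    # freeze the LLM layer; only the
            p.requires_grad = False              # projections (and the SLM) train

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, slm_dim) SLM hidden states
        return self.down(self.llm_layer(self.up(hidden)))

# Toy usage with a stand-in "LLM layer" (in practice this would be one
# transformer block copied from a pretrained LLM):
llm_layer = nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True)
block = PiFiStyleBlock(llm_layer, slm_dim=768, llm_dim=1024)
x = torch.randn(2, 16, 768)
print(block(x).shape)  # torch.Size([2, 16, 768])

Under this reading, task fine-tuning updates the SLM's own weights and the two small projections while the inserted LLM layer stays frozen, so the LLM's knowledge is reused at a fixed cost and only a modest number of new parameters are trained.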

@article{kim2025_2506.07424,
  title={Plug-in and Fine-tuning: Bridging the Gap between Small Language Models and Large Language Models},
  author={Kyeonghyun Kim and Jinhee Jang and Juhwan Choi and Yoonji Lee and Kyohoon Jin and Youngbin Kim},
  journal={arXiv preprint arXiv:2506.07424},
  year={2025}
}