Towards Harnessing the Collaborative Power of Large and Small Models for Domain Tasks

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities, but they require vast amounts of data and computational resources. In contrast, smaller models (SMs), while less powerful, can be more efficient and tailored to specific domains. In this position paper, we argue that taking a collaborative approach, where large and small models work synergistically, can accelerate the adaptation of LLMs to private domains and unlock new potential in AI. We explore various strategies for model collaboration and identify potential challenges and opportunities. Building upon this, we advocate for industry-driven research that prioritizes multi-objective benchmarks on real-world private datasets and applications.

@article{liu2025_2504.17421,
  title={Towards Harnessing the Collaborative Power of Large and Small Models for Domain Tasks},
  author={Yang Liu and Bingjie Yan and Tianyuan Zou and Jianqing Zhang and Zixuan Gu and Jianbing Ding and Xidong Wang and Jingyi Li and Xiaozhou Ye and Ye Ouyang and Qiang Yang and Ya-Qin Zhang},
  journal={arXiv preprint arXiv:2504.17421},
  year={2025}
}