InfiFusion: A Unified Framework for Enhanced Cross-Model Reasoning via LLM Fusion

6 January 2025
Zhaoyi Yan, Zhijie Sang, Yiming Zhang, Yuhao Fu, Baoyi He, Qi Zhou, Yining Di, Chunlin Ji, Shengyu Zhang, Leilei Gan
arXiv:2501.02795
Main: 8 pages · 2 figures · Bibliography: 3 pages · 9 tables · Appendix: 3 pages
Abstract

We introduce InfiFusion, an efficient training pipeline designed to integrate multiple domain-specialized Large Language Models (LLMs) into a single pivot model, effectively harnessing the strengths of each source model. Traditional fusion methods either merge model parameters directly or rely on knowledge distillation with rigid assumptions, limiting their flexibility and efficiency. InfiFusion overcomes these limitations by enhancing Universal Logit Distillation (ULD) with Top-K selection and Logits Standardization. We propose two fusion strategies: Pairwise Fusion (InfiFusion_p), where each source model's knowledge is distilled individually into the pivot model and the results are then merged, and Unified Fusion (InfiFusion_u), where knowledge from all source models is distilled simultaneously into the pivot model. InfiFusion outperforms state-of-the-art models such as Qwen-2.5-14B-Instruct and Phi-4 across 11 widely used benchmarks covering reasoning, coding, mathematics, and instruction-following tasks. Notably, InfiFusion achieves this superior performance while significantly reducing computational cost, completing full training with only 160 H800 GPU hours, compared to the millions typically required for traditional LLM training.
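To make the distillation objective more concrete, below is a minimal PyTorch sketch of how Top-K selection and logit standardization could plug into a ULD-style loss. This is an illustration under assumptions, not the authors' implementation: the function names, the k=10 cutoff, and the toy vocabulary sizes are all hypothetical, and the loss shown is a simple sorted-probability L1 gap in the spirit of ULD rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def standardize_logits(logits, eps=1e-6):
    # Z-score standardization across the vocabulary dimension,
    # so teacher and student logits share zero mean / unit variance
    # before being compared.
    mean = logits.mean(dim=-1, keepdim=True)
    std = logits.std(dim=-1, keepdim=True)
    return (logits - mean) / (std + eps)

def topk_uld_loss(student_logits, teacher_logits, k=10):
    # Standardize, sort each distribution, keep only the top-k
    # probability mass, and penalize the elementwise gap between
    # the sorted distributions. Comparing sorted probabilities needs
    # no vocabulary alignment, which is the key property of ULD
    # when teacher and student use different tokenizers.
    s = F.softmax(standardize_logits(student_logits), dim=-1)
    t = F.softmax(standardize_logits(teacher_logits), dim=-1)
    s_top = s.sort(dim=-1, descending=True).values[..., :k]
    t_top = t.sort(dim=-1, descending=True).values[..., :k]
    return (s_top - t_top).abs().sum(dim=-1).mean()

# Toy usage: teacher and student may have different vocab sizes.
student = torch.randn(2, 5, 32000)   # (batch, seq, student vocab)
teacher = torch.randn(2, 5, 50000)   # (batch, seq, teacher vocab)
loss = topk_uld_loss(student, teacher, k=10)
print(loss.item())
```

In a pairwise setup this loss would be applied once per source model with the results merged afterward; in a unified setup, the per-teacher losses would be summed or weighted in a single distillation pass.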
