DFPE: A Diverse Fingerprint Ensemble for Enhancing LLM Performance

29 January 2025
Seffi Cohen
Niv Goldshlager
Nurit Cohen-Inger
Bracha Shapira
Lior Rokach
Abstract

Large Language Models (LLMs) have shown remarkable capabilities across various natural language processing tasks but often struggle to excel uniformly in diverse or complex domains. We propose a novel ensemble method, Diverse Fingerprint Ensemble (DFPE), which leverages the complementary strengths of multiple LLMs to achieve more robust performance. Our approach involves: (1) clustering models based on their response "fingerprint" patterns, (2) applying a quantile-based filtering mechanism to remove underperforming models at a per-subject level, and (3) assigning adaptive weights to the remaining models based on their subject-wise validation accuracy. In experiments on the Massive Multitask Language Understanding (MMLU) benchmark, DFPE outperforms the best single model by 3% in overall accuracy and 5% in discipline-level accuracy. This method increases the robustness and generalization of LLMs and underscores how model selection, diversity preservation, and performance-driven weighting can effectively address challenging, multi-faceted language understanding tasks.
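The three stages above lend themselves to a compact implementation. Below is a minimal Python sketch of a DFPE-style ensemble, assuming each model's "fingerprint" is a binary correctness vector over a shared validation set; the specific choices here (KMeans clustering, a 25th-percentile accuracy cutoff, and softmax weights over subject-wise validation accuracy) are illustrative assumptions, not the paper's exact configuration.

# Illustrative DFPE-style ensemble sketch (assumptions: binary correctness
# fingerprints, KMeans clustering, quantile filtering, accuracy-softmax weights).
import numpy as np
from sklearn.cluster import KMeans
from collections import Counter

def dfpe_weights(fingerprints, subject_acc, subject,
                 n_clusters=4, quantile=0.25, temperature=0.1):
    """fingerprints: dict model -> 0/1 vector of validation correctness.
       subject_acc:  dict model -> dict subject -> validation accuracy."""
    models = list(fingerprints)
    X = np.array([fingerprints[m] for m in models])

    # (1) Cluster models by their response fingerprints to preserve diversity:
    #     keep the best model (by this subject's accuracy) from each cluster.
    labels = KMeans(n_clusters=min(n_clusters, len(models)), n_init=10).fit_predict(X)
    survivors = []
    for c in set(labels):
        members = [m for m, l in zip(models, labels) if l == c]
        survivors.append(max(members, key=lambda m: subject_acc[m][subject]))

    # (2) Quantile-based filtering: drop survivors whose accuracy on this
    #     subject falls below the chosen quantile.
    accs = np.array([subject_acc[m][subject] for m in survivors])
    threshold = np.quantile(accs, quantile)
    kept = [m for m, a in zip(survivors, accs) if a >= threshold]

    # (3) Adaptive weights: softmax over subject-wise validation accuracy.
    kept_accs = np.array([subject_acc[m][subject] for m in kept])
    w = np.exp(kept_accs / temperature)
    return dict(zip(kept, w / w.sum()))

def ensemble_answer(answers, weights):
    """Weighted vote over each kept model's answer for one question."""
    votes = Counter()
    for m, w in weights.items():
        votes[answers[m]] += w
    return votes.most_common(1)[0][0]

In this reading, keeping only the strongest model from each fingerprint cluster preserves behavioral diversity, while the quantile filter and accuracy-based weights bias the final vote toward models that are reliable for the subject at hand.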

@article{cohen2025_2501.17479,
  title={DFPE: A Diverse Fingerprint Ensemble for Enhancing LLM Performance},
  author={Seffi Cohen and Niv Goldshlager and Nurit Cohen-Inger and Bracha Shapira and Lior Rokach},
  journal={arXiv preprint arXiv:2501.17479},
  year={2025}
}