Assessing the Macro and Micro Effects of Random Seeds on Fine-Tuning Large Language Models

10 March 2025
Hao Zhou
Guergana Savova
Lijing Wang
Abstract

The impact of random seeds in fine-tuning large language models (LLMs) has been largely overlooked despite its potential influence on model performance. In this study, we systematically evaluate the effects of random seeds on LLMs using the GLUE and SuperGLUE benchmarks. We analyze the macro-level impact through traditional metrics like accuracy and F1, calculating their mean and variance to quantify performance fluctuations. To capture the micro-level effects, we introduce a novel metric, consistency, which measures the stability of individual predictions across runs. Our experiments reveal significant variance at both the macro and micro levels, underscoring the need for careful consideration of random seeds in fine-tuning and evaluation.
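The abstract does not spell out how the consistency metric is computed, so the following is a minimal sketch under one plausible reading: consistency is the fraction of test examples whose predicted label agrees across every seed run, alongside the mean and variance of accuracy used for the macro-level analysis. The function names and the toy seed/prediction values below are hypothetical, not taken from the paper.

import numpy as np

def macro_stats(accuracies):
    # Mean and variance of a metric (e.g., accuracy) across seed runs.
    accs = np.asarray(accuracies, dtype=float)
    return accs.mean(), accs.var()

def prediction_consistency(predictions_per_seed):
    # predictions_per_seed: array of shape (num_seeds, num_examples).
    # An example counts as consistent if all seeds assign it the same label.
    preds = np.asarray(predictions_per_seed)
    agree = (preds == preds[0]).all(axis=0)
    return agree.mean()

# Toy usage: 3 seeds, 5 test examples (values are illustrative only).
accs = [0.81, 0.79, 0.84]
preds = [[1, 0, 1, 1, 0],
         [1, 0, 0, 1, 0],
         [1, 0, 1, 1, 0]]
print(macro_stats(accs))              # (mean, variance) of accuracy across seeds
print(prediction_consistency(preds))  # 0.8 -> 4 of 5 predictions stable across seeds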

@article{zhou2025_2503.07329,
  title={Assessing the Macro and Micro Effects of Random Seeds on Fine-Tuning Large Language Models},
  author={Hao Zhou and Guergana Savova and Lijing Wang},
  journal={arXiv preprint arXiv:2503.07329},
  year={2025}
}