CMR Scaling Law: Predicting Critical Mixture Ratios for Continual Pre-training of Language Models

24 July 2024
Jiawei Gu, Zacc Yang, Chuanghao Ding, Rui Zhao, Fei Tan
Community: CLL

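The page above carries only metadata, but for context: a critical-mixture-ratio (CMR) scaling law predicts, from a few small pilot runs, how the balance between general-corpus replay data and domain data in continual pre-training (CPT) affects loss, so a good ratio can be chosen before committing to a full run. The sketch below is a generic illustration of that workflow with an invented U-shaped functional form and synthetic data; it is not the paper's actual law or fitting procedure.

```python
# Generic illustration only: the functional form, constants, and data below
# are invented; the paper's actual CMR law and fitting recipe may differ.
import numpy as np
from scipy.optimize import curve_fit

def loss_model(r, a, b, c, d):
    # Hypothetical U-shaped loss in the general-data mixture ratio r:
    # small r hurts general ability, large r starves domain learning.
    return a * r ** (-b) + c * (1.0 - r) ** (-d)

# Synthetic observations standing in for a handful of pilot CPT runs.
rng = np.random.default_rng(0)
ratios = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9])
losses = loss_model(ratios, 0.02, 0.8, 0.05, 0.6) + rng.normal(0.0, 1e-3, ratios.size)

# Fit the parametric curve to the pilot-run losses.
params, _ = curve_fit(loss_model, ratios, losses,
                      p0=[0.1, 1.0, 0.1, 1.0], bounds=(1e-6, 10.0))

# Read off the mixture ratio the fitted curve predicts to be best.
grid = np.linspace(0.01, 0.99, 981)
best = grid[np.argmin(loss_model(grid, *params))]
print(f"fitted parameters: {np.round(params, 3)}, predicted best ratio: {best:.2f}")
```

In practice the synthetic `losses` would be replaced by measured validation losses from real pilot runs at each candidate ratio.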
Papers citing "CMR Scaling Law: Predicting Critical Mixture Ratios for Continual Pre-training of Language Models"

4 / 4 papers shown
Learning Dynamics in Continual Pre-Training for Large Language Models
Xingjin Wang, Howe Tissue, Lu Wang, Linjing Li, D. Zeng
Community: CLL
12 May 2025

Selecting Large Language Model to Fine-tune via Rectified Scaling Law
Haowei Lin, Baizhou Huang, Haotian Ye, Qinyu Chen, Zihao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Y. Zou, Yitao Liang
04 Feb 2024

What Makes Pre-trained Language Models Better Zero-shot Learners?
Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, Fei Tan
30 Sep 2022

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020
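For reference, the headline fitted form reported in Kaplan et al. (2020), quoted from that paper rather than from this page: test loss follows a joint power law in non-embedding parameter count $N$ and dataset size $D$,

$$
L(N, D) = \left[ \left( \frac{N_c}{N} \right)^{\alpha_N / \alpha_D} + \frac{D_c}{D} \right]^{\alpha_D},
\qquad \alpha_N \approx 0.076, \quad \alpha_D \approx 0.095.
$$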