ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Efficient Language Adaptive Pre-training: Extending State-of-the-Art Large Language Models for Polish (arXiv:2402.09759)

15 February 2024
Szymon Ruciński

Papers citing "Efficient Language Adaptive Pre-training: Extending State-of-the-Art Large Language Models for Polish"

2 papers shown
To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis
Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, Yang You
22 May 2023
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020