Skywork: A More Open Bilingual Foundation Model

30 October 2023
Tianwen Wei
Liang Zhao
Lichang Zhang
Bo Zhu
Lijie Wang
Haihua Yang
Biye Li
Cheng Cheng
Weiwei Lü
Rui Hu
Chenxia Li
Liu Yang
Xilin Luo
X. Wu
Lunan Liu
Wenjun Cheng
Peng Cheng
Jianhao Zhang
Xiaoyu Zhang
Lei Lin
Xiaokun Wang
Yutuan Ma
Chuanhai Dong
Yanqi Sun
Yifu Chen
Yongyi Peng
Xiaojuan Liang
Shuicheng Yan
Han Fang
Yahui Zhou
Abstract

In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLM of its size to date. We introduce a two-stage training methodology using a segmented corpus: a general-purpose pre-training stage followed by a domain-specific enhancement stage. We show that our model not only excels on popular benchmarks but also achieves state-of-the-art performance in Chinese language modeling across diverse domains. Furthermore, we propose a novel leakage detection method and demonstrate that test data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained at intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high-quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as valuable open-source resources to democratize access to high-quality LLMs.
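The abstract does not spell out the leakage detection method, but a common formulation of such checks compares a model's per-token loss on benchmark test items against its loss on freshly written reference items of similar style and difficulty: a markedly lower loss on the benchmark suggests the test set may have leaked into the pre-training data. Below is a minimal, hypothetical Python sketch of that idea; the model name, placeholder documents, and decision threshold are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a perplexity-based contamination check.
# Compares mean language-model loss on benchmark test documents against
# loss on comparable reference documents; all specifics are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_loss(model, tokenizer, texts, device="cpu"):
    """Average per-token cross-entropy loss over a list of documents."""
    losses = []
    model.eval()
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True).to(device)
            out = model(**enc, labels=enc["input_ids"])
            losses.append(out.loss.item())
    return sum(losses) / len(losses)

model_name = "gpt2"  # placeholder; substitute the causal LM under test
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Benchmark test items vs. freshly written items of similar distribution
# (tiny placeholder samples; in practice these would be full datasets).
benchmark_docs = ["Q: What is 17 + 25? A: 42."]
reference_docs = ["Q: What is 13 + 29? A: 42."]

l_test = mean_loss(model, tok, benchmark_docs)
l_ref = mean_loss(model, tok, reference_docs)

# If the model is much more fluent on the benchmark than on comparable
# fresh text, the test set may have leaked into its training data.
if l_ref - l_test > 0.3:  # illustrative threshold, not from the paper
    print(f"possible contamination: test loss {l_test:.3f} vs reference {l_ref:.3f}")
```

In practice such a check is run over many documents per split and judged with a significance test rather than a fixed gap, but the loss comparison above captures the core signal.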
