
Qwen Technical Report

28 September 2023
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, K. Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xinyu Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
Abstract

Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showing impressive performance even when compared with larger models on complex tasks such as using a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as a mathematics-focused model, Math-Qwen-Chat, all built upon the base language models. These models significantly outperform open-source models while falling only slightly behind proprietary models.

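For readers who want to try the openly released chat checkpoints the abstract refers to, the sketch below shows one common way to load such a model with Hugging Face Transformers. It is an illustration only: the repository id "Qwen/Qwen-7B-Chat", the trust_remote_code flag, and the chat() helper are assumptions about the released model code, not details taken from this report.

# Minimal usage sketch (not from the paper): loading a Qwen chat checkpoint
# with Hugging Face Transformers. The checkpoint name and the chat() helper
# provided by the model's remote code are assumptions and may differ between
# releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-7B-Chat"  # assumed Hugging Face Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
).eval()

# The remote code for the chat models exposes a chat() method that tracks
# dialogue history and applies the chat template before generation.
response, history = model.chat(
    tokenizer, "Write a short Python function that reverses a string.", history=None
)
print(response)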
View on arXiv: https://arxiv.org/abs/2309.16609