ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Home › Papers › 2401.14196 › Cited By
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence

25 January 2024
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu-Huan Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, W. Liang
    ELM
ArXiv · PDF · HTML

Papers citing "DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence"

6 / 106 papers shown
Exploring and Evaluating Hallucinations in LLM-Powered Code Generation
Fang Liu, Yang Liu, Lin Shi, Houkun Huang, Ruifeng Wang, Zhen Yang, Li Zhang, Zhongqi Li, Yuchi Ma
41 · 103 · 0 · 01 Apr 2024
FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs
Guoqiang Chen, Xiuwei Shang, Shaoyin Cheng, Yanming Zhang, Weiming Zhang, Neng H. Yu
92 · 2 · 0 · 27 Mar 2024
Large Language Models: A Survey
Shervin Minaee, Tomáš Mikolov, Narjes Nikzad, M. Asgari-Chenaghlu, R. Socher, Xavier Amatriain, Jianfeng Gao
ALM · LM&MA · ELM
115 · 347 · 0 · 09 Feb 2024
A Survey on Natural Language Processing for Programming
Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che
13 · 1 · 0 · 12 Dec 2022
CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
204 · 1,451 · 0 · 02 Sep 2021
Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
SyDa
237 · 588 · 0 · 14 Jul 2021