Code Llama: Open Foundation Models for Code

24 August 2023
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish P Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
ELM, ALM

Papers citing "Code Llama: Open Foundation Models for Code"

Showing 16 of 216 citing papers.

Prompt Cache: Modular Attention Reuse for Low-Latency Inference
In Gim, Guojun Chen, Seung-seob Lee, Nikhil Sarda, Anurag Khandelwal, Lin Zhong
07 Nov 2023

CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules
Hung Le, Hailin Chen, Amrita Saha, Akash Gokul, Doyen Sahoo, Shafiq R. Joty
LRM
13 Oct 2023

Cognitive Architectures for Language Agents
T. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
LLMAG, LM&Ro
05 Sep 2023

Bias Testing and Mitigation in LLM-based Code Generation
Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui
03 Sep 2023

Is Self-Repair a Silver Bullet for Code Generation?
Theo X. Olausson, J. Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
LRM
16 Jun 2023

AI-assisted Code Authoring at Scale: Fine-tuning, deploying, and mixed methods evaluation
V. Murali, C. Maddila, Imad Ahmad, Michael Bolin, Daniel Cheng, Negar Ghorbani, Renuka Fernandez, Nachiappan Nagappan, Peter C. Rigby
20 May 2023

CodeGen2: Lessons for Training LLMs on Programming and Natural Languages
Erik Nijkamp, A. Ghobadzadeh, Caiming Xiong, Silvio Savarese, Yingbo Zhou
03 May 2023

CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models
Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz
ELM
08 Feb 2023

A Survey on Natural Language Processing for Programming
Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che
12 Dec 2022

CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning
Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, S. Hoi
SyDa, ALM
05 Jul 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
02 Sep 2021

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021

Measuring Coding Challenge Competence With APPS
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, ..., Collin Burns, Samir Puranik, Horace He, D. Song, Jacob Steinhardt
ELM, AIMat, ALM
20 May 2021

DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
Baptiste Rozière, Marie-Anne Lachaux, Marc Szafraniec, Guillaume Lample
AI4CE
15 Feb 2021

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, ..., Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
ELM
09 Feb 2021