Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom

Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
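The paper itself contains no code, but as a minimal sketch of how the released Llama 2-Chat checkpoints can be queried for dialogue: the snippet below assumes access to the gated meta-llama/Llama-2-7b-chat-hf weights on the Hugging Face Hub (license acceptance required) and the transformers and accelerate libraries; the prompt contents are illustrative only.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # 7B chat variant; 13B and 70B variants are published under analogous names.
    model_id = "meta-llama/Llama-2-7b-chat-hf"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Llama 2-Chat expects a specific dialogue prompt format ([INST] ... [/INST]);
    # apply_chat_template produces it from a list of role/content messages.
    messages = [{"role": "user", "content": "Explain RLHF in one sentence."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))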