OLMo: Accelerating the Science of Language Models

1 February 2024 · arXiv:2402.00838
Dirk Groeneveld
Iz Beltagy
Pete Walsh
Akshita Bhagia
Rodney Michael Kinney
Oyvind Tafjord
Ananya Harsh Jha
Hamish Ivison
Ian H. Magnusson
Yizhong Wang
Shane Arora
David Atkinson
Russell Authur
Khyathi Raghavi Chandu
Arman Cohan
Jennifer Dumas
Yanai Elazar
Yuling Gu
Jack Hessel
Tushar Khot
William Merrill
Jacob Morrison
Niklas Muennighoff
Aakanksha Naik
Crystal Nam
Matthew E. Peters
Valentina Pyatkin
Abhilasha Ravichander
Dustin Schwenk
Saurabh Shah
Will Smith
Emma Strubell
Nishant Subramani
Mitchell Wortsman
Pradeep Dasigi
Nathan Lambert
Kyle Richardson
Luke Zettlemoyer
Jesse Dodge
Kyle Lo
Luca Soldaini
Noah A. Smith
Hanna Hajishirzi
Abstract

Language models (LMs) have become ubiquitous in both NLP research and commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details of their training data, architectures, and development undisclosed. Given the importance of these details in scientifically studying these models, including their biases and potential risks, we believe it is essential for the research community to have access to powerful, truly open LMs. To this end, we have built OLMo, a competitive, truly Open Language Model, to enable the scientific study of language models. Unlike most prior efforts that have only released model weights and inference code, we release OLMo alongside open training data and training and evaluation code. We hope this release will empower the open research community and inspire a new wave of innovation.
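For readers who want to experiment with the released weights, the sketch below loads an OLMo checkpoint through the Hugging Face Transformers API and generates a short continuation. The checkpoint name "allenai/OLMo-7B-hf" is an assumption here, not something the abstract specifies; consult the official OLMo release for the exact model identifiers.

# Minimal sketch: load an OLMo checkpoint with Hugging Face Transformers
# and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B-hf"  # assumed checkpoint name, not from the paper

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt, greedily decode 32 new tokens, and print the result.
inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))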
