Acme: A Research Framework for Distributed Reinforcement Learning

1 June 2020
Matthew W. Hoffman
Bobak Shahriari
John Aslanides
Gabriel Barth-Maron
Nikola Momchev
Danila Sinopalnikov
Piotr Stańczyk
Sabela Ramos
Anton Raichuk
Damien Vincent
Léonard Hussenot
Robert Dadashi
Gabriel Dulac-Arnold
Manu Orsini
Alexis Jacq
Johan Ferret
Nino Vieillard
Seyed Kamyar Seyed Ghasemipour
Sertan Girgin
Olivier Pietquin
Feryal M. P. Behbahani
Tamara Norman
A. Abdolmaleki
Albin Cassirer
Fan Yang
Kate Baumli
Sarah Henderson
Abe Friesen
Ruba Haroun
Alexander Novikov
Sergio Gomez Colmenarejo
Serkan Cabi
Çağlar Gülçehre
T. Paine
Srivatsan Srinivasan
A. Cowie
Ziyun Wang
Bilal Piot
Nando de Freitas
arXiv:2006.00979
Abstract

Deep reinforcement learning (RL) has led to many recent groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used at various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms and show how these algorithms can be scaled up to much larger and more complex environments. This highlights one of the primary advantages of Acme: it can be used to implement large, distributed RL algorithms that run at massive scale while still maintaining the readability of the implementation. This work presents a second version of the paper, which coincides with an increase in modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, and various new agents implemented as part of Acme.
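The modular design the abstract describes, in which an agent is split into an acting component that generates experience and a learning component that consumes it, connected through a replay datastore, can be illustrated with a short sketch. The following is a hypothetical, self-contained Python illustration, not the dm-acme library itself: the method names (select_action, observe_first, observe, update) follow the Actor interface Acme documents, while ReplayBuffer, Learner, ToyEnv, and environment_loop are simplified stand-ins for Reverb, a real learner, a real environment, and Acme's EnvironmentLoop.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal FIFO experience buffer (a stand-in for Reverb in real Acme)."""

    def __init__(self, capacity=10_000):
        self._storage = deque(maxlen=capacity)

    def add(self, transition):
        self._storage.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self._storage), min(batch_size, len(self._storage)))


class RandomActor:
    """Generates experience; method names mirror Acme's documented Actor interface."""

    def __init__(self, num_actions, buffer):
        self._num_actions = num_actions
        self._buffer = buffer
        self._last_obs = None

    def select_action(self, observation):
        # A real actor would query a policy network; here we act uniformly at random.
        return random.randrange(self._num_actions)

    def observe_first(self, observation):
        self._last_obs = observation

    def observe(self, action, reward, next_observation, done):
        self._buffer.add((self._last_obs, action, reward, next_observation, done))
        self._last_obs = next_observation

    def update(self):
        # In Acme this would fetch fresh parameters from the learner; a no-op here.
        pass


class Learner:
    """Consumes sampled experience to update parameters (a stub for brevity)."""

    def __init__(self, buffer, batch_size=32):
        self._buffer = buffer
        self._batch_size = batch_size

    def step(self):
        # A real learner would compute a loss on the batch and apply gradients.
        return self._buffer.sample(self._batch_size)


class ToyEnv:
    """A trivial fixed-horizon environment used only to exercise the loop."""

    def __init__(self, horizon=5):
        self._horizon = horizon
        self._t = 0

    def reset(self):
        self._t = 0
        return 0  # initial observation

    def step(self, action):
        self._t += 1
        return self._t, 1.0, self._t >= self._horizon  # observation, reward, done


def environment_loop(env, actor, learner, num_episodes):
    """Ties actor, learner, and environment together, as Acme's EnvironmentLoop does."""
    for _ in range(num_episodes):
        obs = env.reset()
        actor.observe_first(obs)
        done = False
        while not done:
            action = actor.select_action(obs)
            obs, reward, done = env.step(action)
            actor.observe(action, reward, obs, done)
            learner.step()
            actor.update()


buffer = ReplayBuffer()
environment_loop(ToyEnv(), RandomActor(num_actions=2, buffer=buffer),
                 Learner(buffer), num_episodes=3)
```

The same decomposition is what enables the scaling the abstract claims: in a distributed Acme agent, many actors and the learner run as separate processes that communicate only through the replay datastore, so the agent's structure is unchanged as it scales.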
