AgentForge: A Flexible Low-Code Platform for Reinforcement Learning Agent Design

Developing a reinforcement learning (RL) agent often involves identifying values for numerous parameters, covering the policy, the reward function, the environment, and the agent's internal architecture. Since these parameters are interrelated in complex ways, optimizing them is a black-box problem that proves especially challenging for nonexperts. Although existing optimization-as-a-service platforms (e.g., Vizier and Optuna) can handle such problems, they are impractical for RL systems: the user must manually map each parameter to distinct components, which makes the effort cumbersome. They also require an understanding of the optimization process, limiting their application beyond the machine learning field and restricting access in areas such as cognitive science, which models human decision-making. To tackle these challenges, the paper presents AgentForge, a flexible low-code platform for optimizing any parameter set across an RL system. Available at this https URL, it allows an optimization problem to be defined in a few lines of code and handed to any of the interfaced optimizers. With AgentForge, the user can optimize the parameters either individually or jointly. The paper presents an evaluation of its performance on a challenging vision-based RL problem.
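To make the black-box framing concrete, the sketch below shows joint parameter optimization in miniature. It is not the AgentForge API (which is not shown in the abstract); it is a minimal stand-in using plain random search over a hypothetical RL parameter space, with a toy surrogate in place of actually training and evaluating an agent. All names and ranges are illustrative assumptions.

```python
import random

# Hypothetical joint search space spanning several RL components
# (names and ranges are illustrative, not from AgentForge).
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),   # policy-optimizer step size
    "discount": (0.90, 0.999),       # reward discount factor
    "hidden_units": (32, 256),       # agent-internal network width
}

def sample_params(rng):
    """Draw one joint configuration from the search space."""
    return {
        "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
        "discount": rng.uniform(*SEARCH_SPACE["discount"]),
        "hidden_units": rng.randint(*SEARCH_SPACE["hidden_units"]),
    }

def evaluate(params):
    """Toy surrogate for training an agent and returning mean episode
    return; a real objective would run the full RL training loop."""
    return (-abs(params["learning_rate"] - 0.01)
            - abs(params["discount"] - 0.99)
            - abs(params["hidden_units"] - 128) / 256)

def optimize(n_trials=200, seed=0):
    """Black-box random search: the optimizer sees only params -> score."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = sample_params(rng)
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = optimize()
print(best, score)
```

A platform such as AgentForge, or an interfaced optimizer like Optuna, replaces the naive random search here with a smarter search strategy while keeping the same interface: the user declares the parameter space and an objective, and the optimizer treats the rest as a black box.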
@article{junior2025_2410.19528,
  title   = {AgentForge: A Flexible Low-Code Platform for Reinforcement Learning Agent Design},
  author  = {Francisco Erivaldo Fernandes Junior and Antti Oulasvirta},
  journal = {arXiv preprint arXiv:2410.19528},
  year    = {2025}
}