Grammar and Gameplay-aligned RL for Game Description Generation with LLMs

20 March 2025
Tsunehiko Tanaka
Edgar Simo-Serra
Abstract

Game Description Generation (GDG) is the task of generating a game description written in a Game Description Language (GDL) from natural language text. Previous studies have explored generation methods leveraging the contextual understanding capabilities of Large Language Models (LLMs); however, accurately reproducing the game features described in the input text remains a challenge. In this paper, we propose reinforcement learning-based fine-tuning of LLMs for GDG (RLGDG). Our training method simultaneously improves grammatical correctness and fidelity to game concepts by introducing both grammar rewards and concept rewards. Furthermore, we adopt a two-stage training strategy in which Reinforcement Learning (RL) is applied after Supervised Fine-Tuning (SFT). Experimental results demonstrate that our proposed method significantly outperforms baselines using SFT alone.
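The abstract describes a reward that combines grammatical correctness of the generated GDL with fidelity to the game concepts in the source text. The paper's actual reward functions are not given here, so the sketch below is purely illustrative: `parses_as_gdl` and `extract_concepts` are hypothetical placeholders (a real grammar reward would run a GDL parser, and a real concept reward would use a learned scorer), and the weights are assumptions.

```python
# Illustrative sketch of a combined grammar + concept reward, as described
# in the abstract. All helpers here are hypothetical stand-ins, NOT the
# paper's implementation.

def parses_as_gdl(description: str) -> bool:
    # Placeholder grammar check: a real implementation would invoke a GDL
    # parser; here we only verify balanced parentheses.
    return description.count("(") == description.count(")")

def extract_concepts(text: str) -> set:
    # Placeholder concept extraction: naive keywording standing in for a
    # learned concept scorer.
    return {w.lower().strip(".,") for w in text.split() if len(w) > 3}

def combined_reward(generated: str, reference: str,
                    w_grammar: float = 0.5, w_concept: float = 0.5) -> float:
    # Grammar reward: 1 if the output parses, else 0.
    grammar_r = 1.0 if parses_as_gdl(generated) else 0.0
    # Concept reward: fraction of reference concepts covered by the output.
    ref = extract_concepts(reference)
    gen = extract_concepts(generated)
    concept_r = len(ref & gen) / len(ref) if ref else 0.0
    # Weighted sum drives the RL fine-tuning stage that follows SFT.
    return w_grammar * grammar_r + w_concept * concept_r
```

In the two-stage strategy the paper describes, a scalar reward of this shape would be fed to a policy-gradient update on top of the SFT checkpoint; the weighting between the two terms is a design choice not specified in the abstract.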

@article{tanaka2025_2503.15783,
  title={Grammar and Gameplay-aligned RL for Game Description Generation with LLMs},
  author={Tsunehiko Tanaka and Edgar Simo-Serra},
  journal={arXiv preprint arXiv:2503.15783},
  year={2025}
}